Re: [vpp-dev] vpp + spdk - spdk crash on getting work item

2019-01-11 Thread Florin Coras
Ramaraj, 

You’ll have to check with the SPDK guys, but I think they’re still based on Ubuntu 18.07.

If you’re trying to run SPDK with VCL directly, that is, without the patch I previously 
pointed you to, it probably won’t work. VCL needs explicit configuration of its 
workers and, from the crash below, I suspect that is not done properly.
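
As a rough sketch (not tested), every thread that calls into vppcom has to register 
itself as a VCL worker before using the API; the vppcom_* calls below are the 
relevant VCL entry points, while the pthread scaffolding, error handling and app 
name are just placeholders for whatever SPDK's reactor threads actually do:

#include <stdio.h>
#include <pthread.h>
#include <vcl/vppcom.h>

/* Worker thread: register with VCL before any other vppcom call, otherwise
 * vcl_worker_get_current() is reached with wrk_index == ~0 (4294967295) and
 * asserts, as in the backtrace below. */
static void *worker_thread (void *arg)
{
  if (vppcom_worker_register () != VPPCOM_OK)
    {
      fprintf (stderr, "vppcom_worker_register failed\n");
      return NULL;
    }

  /* vppcom calls are now legal on this thread. */
  int epfd = vppcom_epoll_create ();
  if (epfd < 0)
    fprintf (stderr, "vppcom_epoll_create failed: %d\n", epfd);

  /* ... epoll/session handling would go here ... */
  return NULL;
}

int main (void)
{
  /* The first thread attaches the application to VPP and becomes worker 0. */
  if (vppcom_app_create ("spdk-like-app") != VPPCOM_OK)
    return 1;

  pthread_t t;
  pthread_create (&t, NULL, worker_thread, NULL);
  pthread_join (t, NULL);

  vppcom_app_destroy ();
  return 0;
}

VCL also needs its usual configuration (for instance a vcl.conf pointed to by the 
VCL_CONFIG environment variable); that part is omitted here.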

Florin

> On Jan 11, 2019, at 10:15 AM, Ramaraj Pandian  wrote:
> 
> I am getting the following panic while running the SPDK target with VPP. Am I 
> missing any configuration-related settings? Any insights would be helpful.
>  
> Thanks
> Ram
>  
>  
> (gdb) bt
> #0  0x7f0182108207 in __GI_raise (sig=sig@entry=6) at 
> ../nptl/sysdeps/unix/sysv/linux/raise.c:55
> #1  0x7f01821098f8 in __GI_abort () at abort.c:90
> #2  0x7f01815f7068 in os_panic () at 
> /vpp-master/src/vppinfra/unix-misc.c:176
> #3  0x7f018155821f in debugger () at /vpp-master/src/vppinfra/error.c:84
> #4  0x7f018155865a in _clib_error (how_to_die=2, function_name=0x0, 
> line_number=0, fmt=0x7f01826fa2a8 "%s:%d (%s) assertion `%s' fails")
> at /vpp-master/src/vppinfra/error.c:143
> #5  0x7f01826c8da0 in vcl_worker_get (wrk_index=4294967295) at 
> /vpp-master/src/vcl/vcl_private.h:541
> #6  0x7f01826c8fa8 in vcl_worker_get_current () at 
> /vpp-master/src/vcl/vcl_private.h:555
> #7  0x7f01826d2b4f in vppcom_epoll_create () at 
> /vpp-master/src/vcl/vppcom.c:2482
> #8  0x0046ad7e in spdk_vpp_sock_group_impl_create () at vpp.c:507
> #9  0x004a6f09 in spdk_sock_group_create () at sock.c:189
> #10 0x004816ed in spdk_nvmf_tcp_poll_group_create 
> (transport=0x968b90) at tcp.c:1273
> #11 0x0047ddd3 in spdk_nvmf_transport_poll_group_create 
> (transport=0x968b90) at transport.c:167
> #12 0x0047c80f in spdk_nvmf_poll_group_add_transport 
> (group=0x7f0179d0, transport=0x968b90) at nvmf.c:836
> #13 0x0047ae83 in spdk_nvmf_tgt_create_poll_group 
> (io_device=0x97b330, ctx_buf=0x7f0179d0) at nvmf.c:120
> #14 0x0048d642 in spdk_get_io_channel (io_device=0x97b330) at 
> thread.c:865
> #15 0x0047c153 in spdk_nvmf_poll_group_create (tgt=0x97b330) at 
> nvmf.c:630
> #16 0x004714c3 in nvmf_tgt_create_poll_group (ctx=0x0) at 
> nvmf_tgt.c:294
> #17 0x0048c99d in spdk_on_thread (ctx=0x969890) at thread.c:610
> #18 0x0048c06d in _spdk_msg_queue_run_batch (thread=0x7f0178c0, 
> max_msgs=8) at thread.c:318
> #19 0x0048c210 in spdk_thread_poll (thread=0x7f0178c0, 
> max_msgs=0) at thread.c:359
> #20 0x00487ea6 in _spdk_reactor_run (arg=0x967400) at reactor.c:320
> #21 0x00407442 in eal_thread_loop.cold.1 ()
> #22 0x7f01824a6dd5 in start_thread (arg=0x7f0180d18700) at 
> pthread_create.c:307
> #23 0x7f01821cfead in clone () at 
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:111



Re: [vpp-dev] Mellanox ConnectX-4 Lx cards binding with DPDK, but not recognized by VPP #vpp

2019-01-11 Thread Ramaraj Pandian
I can see the Mellanox cards with master VPP, kernel 4.19, and OFED 4.5 on CentOS 
7.6. 
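
For anyone trying the same, the build boils down to enabling the Mellanox DPDK PMD 
when building VPP after installing OFED; the exact make variables have moved around 
between VPP releases, so treat this as a sketch rather than a verified recipe:

  make install-dep
  make dpdk-install-dev DPDK_MLX5_PMD=y
  make build-release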

Thanks
Ram

-Original Message-
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Stephen 
Hemminger
Sent: Thursday, January 10, 2019 12:09 AM
To: vpp-dev@lists.fd.io
Cc: nix_...@yahoo.co.in
Subject: Re: [vpp-dev] Mellanox ConnectX-4 Lx cards binding with DPDK, but not 
recognized by VPP #vpp

On Wed, 09 Jan 2019 00:18:13 -0800
"Nixon Raj via Lists.Fd.Io"  wrote:

> Setup:
> 
> - Platform: GNU/Linux
> - Kernel: 4.4.0-131-generic
> - Processor: x86_64
> - OS: Ubuntu 16.04
> 
> MLNX_OFED driver version: 4.1-1.0.2.0
> 
> Followed this link:
> https://community.mellanox.com/s/article/how-to-build-vpp-fd-io--160--development-environment-with-mellanox-dpdk-pmd-for-connectx-4-and-connectx-5
> 
> Installation was successful and the ports bind to DPDK with vfio-pci, but they 
> are not recognized by VPP:
> 
> # vppctl sh pci
> Address           Sock VID:PID     Link Speed   Driver     Product Name   Vital Product Data
> 0000:02:00.0           15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:02:00.1           15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:03:00.0           15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:03:00.1           15b3:1015   8.0 GT/s x4  vfio-pci
> 0000:04:00.0           8086:1539   2.5 GT/s x1  igb
> 0000:05:00.0           8086:1539   2.5 GT/s x1  igb
> 0000:06:00.0           8086:1539   2.5 GT/s x1  igb
> 0000:07:00.0           8086:1539   2.5 GT/s x1  igb
> 0000:08:00.0           8086:1539   2.5 GT/s x1  igb
> 
> # vppctl sh int
> 
>           Name               Idx       State          Counter          Count
>           local0              0        down

Mellanox NICs don't use vfio-pci (they use InfiniBand verbs), so check the hardware 
table.
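
With the Mellanox PMD the ports stay bound to the mlx5_core kernel driver rather 
than vfio-pci, and are whitelisted in startup.conf instead. A rough sketch, assuming 
VPP was built with the Mellanox DPDK PMD and using the PCI addresses from the 
output above:

dpdk {
  dev 0000:02:00.0
  dev 0000:02:00.1
  dev 0000:03:00.0
  dev 0000:03:00.1
}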




[vpp-dev] vpp + spdk - spdk crash on getting work item

2019-01-11 Thread Ramaraj Pandian
I am getting the following panic while running the SPDK target with VPP. Am I missing 
any configuration-related settings? Any insights would be helpful.

Thanks
Ram


(gdb) bt
#0  0x7f0182108207 in __GI_raise (sig=sig@entry=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:55
#1  0x7f01821098f8 in __GI_abort () at abort.c:90
#2  0x7f01815f7068 in os_panic () at 
/vpp-master/src/vppinfra/unix-misc.c:176
#3  0x7f018155821f in debugger () at /vpp-master/src/vppinfra/error.c:84
#4  0x7f018155865a in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x7f01826fa2a8 "%s:%d (%s) assertion `%s' fails")
at /vpp-master/src/vppinfra/error.c:143
#5  0x7f01826c8da0 in vcl_worker_get (wrk_index=4294967295) at 
/vpp-master/src/vcl/vcl_private.h:541
#6  0x7f01826c8fa8 in vcl_worker_get_current () at 
/vpp-master/src/vcl/vcl_private.h:555
#7  0x7f01826d2b4f in vppcom_epoll_create () at 
/vpp-master/src/vcl/vppcom.c:2482
#8  0x0046ad7e in spdk_vpp_sock_group_impl_create () at vpp.c:507
#9  0x004a6f09 in spdk_sock_group_create () at sock.c:189
#10 0x004816ed in spdk_nvmf_tcp_poll_group_create (transport=0x968b90) 
at tcp.c:1273
#11 0x0047ddd3 in spdk_nvmf_transport_poll_group_create 
(transport=0x968b90) at transport.c:167
#12 0x0047c80f in spdk_nvmf_poll_group_add_transport 
(group=0x7f0179d0, transport=0x968b90) at nvmf.c:836
#13 0x0047ae83 in spdk_nvmf_tgt_create_poll_group (io_device=0x97b330, 
ctx_buf=0x7f0179d0) at nvmf.c:120
#14 0x0048d642 in spdk_get_io_channel (io_device=0x97b330) at 
thread.c:865
#15 0x0047c153 in spdk_nvmf_poll_group_create (tgt=0x97b330) at 
nvmf.c:630
#16 0x004714c3 in nvmf_tgt_create_poll_group (ctx=0x0) at nvmf_tgt.c:294
#17 0x0048c99d in spdk_on_thread (ctx=0x969890) at thread.c:610
#18 0x0048c06d in _spdk_msg_queue_run_batch (thread=0x7f0178c0, 
max_msgs=8) at thread.c:318
#19 0x0048c210 in spdk_thread_poll (thread=0x7f0178c0, max_msgs=0) 
at thread.c:359
#20 0x00487ea6 in _spdk_reactor_run (arg=0x967400) at reactor.c:320
#21 0x00407442 in eal_thread_loop.cold.1 ()
#22 0x7f01824a6dd5 in start_thread (arg=0x7f0180d18700) at 
pthread_create.c:307
#23 0x7f01821cfead in clone () at 
../sysdeps/unix/sysv/linux/x86_64/clone.S:111


Re: [vpp-dev] :: GRE tunnel dropping MPLS packets

2019-01-11 Thread Omer Majeed
Hi Neale,

Yep, it's up and running.
Thanks man.

Best Regards,
Omer

On Fri, Jan 11, 2019, 1:33 PM Neale Ranns (nranns) wrote:
>
> Hi Omer,
>
>
>
> Do you have your use case working now?
>
>
>
> /neale
>
>
>
> From:  on behalf of "omer.maj...@sofioni.com" <omer.maj...@sofioni.com>
> Date: Friday, January 11, 2019 at 02:57
> To: "Neale Ranns (nranns)"
> Cc: Omer Majeed , "vpp-dev@lists.fd.io" <vpp-dev@lists.fd.io>
> Subject: Re: [vpp-dev] :: GRE tunnel dropping MPLS packets
>
>
>
> Hi Neale,
>
> The route for the destination IP of the GRE tunnel had also been added as an 
> MPLS route.
>
> Since MPLS in VPP doesn't work for L2 forwarding, the GRE tunnel was
> dropping the packets.
>
> Thanks.
>
> Best Regards,
>
> Omer
>
>
>
>
>
> On 2019-01-08 17:12, Neale Ranns via Lists.Fd.Io wrote:
>
>
>
> Hi Omer,
>
>
>
> Your config looks OK. I would start debugging with a packet trace.
>
>
>
> /neale
>
>
>
>
>
> From:  on behalf of Omer Majeed <omer.majeed...@gmail.com>
> Date: Monday, January 7, 2019 at 20:47
> To: "vpp-dev@lists.fd.io"
> Subject: [vpp-dev] :: GRE tunnel dropping MPLS packets
>
>
>
> Hi,
>
>
>
> I'm running VPP on a CentOS 7 machine (say machine A), and an application on
> another CentOS 7 machine (say machine B).
>
> I've made a GRE tunnel between those 2 machines.
>
>
>
> vpp# show gre tunnel
> [0] instance 0 src 192.168.17.10 dst 192.168.17.6 fib-idx 0 sw-if-idx 8
> payload L3
>
>
>
> I made that gre0 interface MPLS-enabled.
>
> I added outgoing MPLS routes in VPP for the IPs on machine B:
>
>
>
> vpp# show ip fib table 2
>
> 192.168.100.4/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:47 buckets:1 uRPF:49 to:[0:0]]
> [0] [@10]: mpls-label[2]:[25:64:0:eos]
> [@1]: mpls via 0.0.0.0 gre0: mtu:9000
> 4500fe2f196ec0a8110ac0a811068847
>   stacked-on:
> [@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000
> ac1f6b20498fdead00280800
> 192.168.100.5/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:46 buckets:1 uRPF:47 to:[0:0]]
> [0] [@10]: mpls-label[0]:[30:64:0:eos]
> [@1]: mpls via 0.0.0.0 gre0: mtu:9000
> 4500fe2f196ec0a8110ac0a811068847
>   stacked-on:
> [@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000
> ac1f6b20498fdead00280800
>
>
>
> For reverse traffic I've added MPLS routes given below
>
>
>
> vpp# show mpls fib table 0
>
> 18:eos/21 fib:0 index:29 locks:2
>   src:API refs:1 entry-flags:uRPF-exempt,
> src-flags:added,contributing,active,
> path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
>   path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
> [@0]: dst-address,unicast lookup in ipv4-VRF:2
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:32 to:[0:0]]
> [0] [@6]: mpls-disposition:[0]:[ip4, pipe]
> [@7]: dst-address,unicast lookup in ipv4-VRF:2
> 19:eos/21 fib:0 index:38 locks:2
>   src:API refs:1 entry-flags:uRPF-exempt,
> src-flags:added,contributing,active,
> path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
>   path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
> [@0]: dst-address,unicast lookup in ipv4-VRF:2
>
>  forwarding:   mpls-eos-chain
>   [@0]: dpo-load-balance: [proto:mpls index:41 buckets:1 uRPF:41 to:[0:0]]
> [0] [@6]: mpls-disposition:[9]:[ip4, pipe]
> [@7]: dst-address,unicast lookup in ipv4-VRF:2
>
>
>
> When I try to ping from machine B to an IP in machine B (VPP VRF 2) through
> that GRE tunnel, the packets arrive on the GRE tunnel but are dropped.
>
> vpp# show int gre0
> 
>               Name               Idx   State   Counter       Count
> gre0                              8     up     rx packets       66
>                                                rx bytes       6996
>                                                drops            66
>                                                (nil)            66
>
>
>
> Is there anything else that needs to be done to get MPLS over GRE working?
>
> Any suggestions on how to debug the issue?
>
>
>
> Thanks a lot.
>
> Best Regards,
>
> Omer
>
>
>


Re: [vpp-dev] :: GRE tunnel dropping MPLS packets

2019-01-11 Thread Neale Ranns via Lists.Fd.Io

Hi Omer,

Do you have your use case working now?

/neale

From:  on behalf of "omer.maj...@sofioni.com"
Date: Friday, January 11, 2019 at 02:57
To: "Neale Ranns (nranns)"
Cc: Omer Majeed , "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] :: GRE tunnel dropping MPLS packets


Hi Neale,

The route for the destination IP of the GRE tunnel had also been added as an MPLS route.

Since MPLS in VPP doesn't work for L2 forwarding, the GRE tunnel was dropping the 
packets.
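
For reference, a rough sketch of the MPLS-over-GRE pieces of the working 
configuration, with the tunnel destination (192.168.17.6) reachable through a plain 
IPv4 route rather than an MPLS one; the CLI syntax is from memory and may differ 
slightly between VPP versions:

vpp# create gre tunnel src 192.168.17.10 dst 192.168.17.6
vpp# set interface state gre0 up
vpp# set interface mpls gre0 enable

vpp# ip route add 192.168.100.4/32 table 2 via gre0 out-labels 25
vpp# ip route add 192.168.100.5/32 table 2 via gre0 out-labels 30

vpp# mpls local-label add eos 18 via ip4-lookup-in-table 2
vpp# mpls local-label add eos 19 via ip4-lookup-in-table 2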

Thanks.

Best Regards,

Omer




On 2019-01-08 17:12, Neale Ranns via Lists.Fd.Io wrote:

Hi Omer,

Your config looks OK. I would start debugging with a packet trace.
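
Something like this, assuming the packets arrive through a DPDK interface (pick the 
appropriate input node otherwise):

vpp# clear trace
vpp# trace add dpdk-input 50
... send a few pings from machine B ...
vpp# show trace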

/neale


From:  on behalf of Omer Majeed
Date: Monday, January 7, 2019 at 20:47
To: "vpp-dev@lists.fd.io"
Subject: [vpp-dev] :: GRE tunnel dropping MPLS packets

Hi,

I'm running VPP on a CentOS 7 machine (say machine A), and an application on another 
CentOS 7 machine (say machine B).
I've made a GRE tunnel between those 2 machines.

vpp# show gre tunnel
[0] instance 0 src 192.168.17.10 dst 192.168.17.6 fib-idx 0 sw-if-idx 8 payload 
L3

I made that gre0 interface MPLS-enabled.
I added outgoing MPLS routes in VPP for the IPs on machine B:

vpp# show ip fib table 2
192.168.100.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:47 buckets:1 uRPF:49 to:[0:0]]
[0] [@10]: mpls-label[2]:[25:64:0:eos]
[@1]: mpls via 0.0.0.0 gre0: mtu:9000 
4500fe2f196ec0a8110ac0a811068847
  stacked-on:
[@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000 
ac1f6b20498fdead00280800
192.168.100.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:46 buckets:1 uRPF:47 to:[0:0]]
[0] [@10]: mpls-label[0]:[30:64:0:eos]
[@1]: mpls via 0.0.0.0 gre0: mtu:9000 
4500fe2f196ec0a8110ac0a811068847
  stacked-on:
[@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000 
ac1f6b20498fdead00280800

For reverse traffic I've added MPLS routes given below

vpp# show mpls fib table 0
18:eos/21 fib:0 index:29 locks:2
  src:API refs:1 entry-flags:uRPF-exempt, src-flags:added,contributing,active,
path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
  path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
[@0]: dst-address,unicast lookup in ipv4-VRF:2

 forwarding:   mpls-eos-chain
  [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:32 to:[0:0]]
[0] [@6]: mpls-disposition:[0]:[ip4, pipe]
[@7]: dst-address,unicast lookup in ipv4-VRF:2
19:eos/21 fib:0 index:38 locks:2
  src:API refs:1 entry-flags:uRPF-exempt, src-flags:added,contributing,active,
path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
  path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
[@0]: dst-address,unicast lookup in ipv4-VRF:2

 forwarding:   mpls-eos-chain
  [@0]: dpo-load-balance: [proto:mpls index:41 buckets:1 uRPF:41 to:[0:0]]
[0] [@6]: mpls-disposition:[9]:[ip4, pipe]
[@7]: dst-address,unicast lookup in ipv4-VRF:2

When I try to ping from machine B to an IP in machine B (VPP VRF 2) through 
that GRE tunnel, the packets arrive on the GRE tunnel but are dropped.
vpp# show int gre0
              Name               Idx   State   Counter       Count
gre0                              8     up     rx packets       66
                                               rx bytes       6996
                                               drops            66
                                               (nil)            66

Is there anything else that needs to be done to get MPLS over GRE working?
Any suggestions on how to debug the issue?

Thanks a lot.
Best Regards,
Omer
