Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread Ravi Kerur
Hi Steven

Shared memory is set up correctly. I am seeing the following errors. The system
on which there is no crash doesn't support 1G hugepages, so I have to use 2M
hugepages with the following config for VPP.



(1) host
vpp# show error
   Count                    Node                 Reason
vpp# show error
   Count                    Node                 Reason
       5          vhost-user-input              mmap failure
       5            ethernet-input              l3 mac mismatch
vpp#
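
To correlate the mmap failures above with the vhost shared-memory setup, the same commands used elsewhere in this thread can be run on the host (interface name assumed to be VirtualEthernet0/0/0):

vpp# debug vhost-user on
vpp# show vhost-user VirtualEthernet0/0/0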

(2) Host VPP config
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
  no-pci

  huge-dir /dev/hugepages
  socket-mem 16,0
}
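
A rough sketch of how the 2M hugepages behind this config might be reserved and mounted on the host (illustrative values, not taken from this setup):

echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /dev/hugepages
mount -t hugetlbfs nodev /dev/hugepages
grep -i huge /proc/meminfo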

(3) Container
vpp# show error
   Count                    Node                 Reason
       5               ip4-glean                ARP requests sent
vpp#

(4) Container VPP config

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
  no-pci
  huge-dir /dev/hugepages
  socket-mem 8,0
  vdev virtio_user0,path=/var/run/usvhost1
}
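
The /var/run/usvhost1 path above assumes the host-side socket was bind-mounted into the container, e.g. (as done elsewhere in this thread, here with the 2M hugepage mount shared as well):

docker run -it --privileged -v /var/run/vpp/sock3.sock:/var/run/usvhost1 -v /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest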

Thanks.

On Wed, Jun 6, 2018 at 1:53 PM, Steven Luong (sluong)  wrote:
> Ravi,
>
> I suppose you already checked the obvious: that the vhost connection is
> established and shared memory has at least 1 region in show vhost. For the
> traffic issue, use show error to see why packets are dropping, and trace add
> vhost-user-input plus show trace to see whether vhost is getting the packet.
>
> Steven
>
> On 6/6/18, 1:33 PM, "Ravi Kerur"  wrote:
>
> Damjan, Steven,
>
> I will get back to the system on which VPP is crashing and get more
> info on it later.
>
> For now, I got hold of another system (also Ubuntu 16.04, x86_64) and tried
> the same configuration:
>
> VPP vhost-user on host
> VPP virtio-user on a container
>
> This time VPP didn't crash. Ping doesn't work though. Both vhost-user
> and virtio are transmitting and receiving packets. What do I need to
> enable so that ping works?
>
> (1) on host:
> show interface
>               Name               Idx    State          Counter           Count
> VhostEthernet0                    1     down
> VhostEthernet1                    2     down
> VirtualEthernet0/0/0              3      up       rx packets                5
>                                                   rx bytes                210
>                                                   tx packets                5
>                                                   tx bytes                210
>                                                   drops                    10
> local0                            0     down
> vpp# show ip arp
> vpp#
>
>
> (2) On container
> show interface
>               Name               Idx    State          Counter           Count
> VirtioUser0/0/0                   1      up       rx packets                5
>                                                   rx bytes                210
>                                                   tx packets                5
>                                                   tx bytes                210
>                                                   drops                    10
> local0                            0     down
> vpp# show ip arp
> vpp#
>
> Thanks.
>
> On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin  
> wrote:
> > Hi Ravi,
> >
> > Sorry for diluting your topic. From your stack trace and show interface 
> output I thought you are using OCTEONTx.
> >
> > Regards,
> > Nitin
> >
> >> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
> >>
> >> Steven, Damjan, Nitin,
> >>
> >> Let me clarify so there is no confusion; since you are assisting me to
> >> get this working, I will make sure we are all on the same page. I believe
> >> OcteonTx is related to Cavium/ARM and I am not using it.
> >>
> >> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
> >> 2MB I had to use '--single-file-segments' option.
> >>
> >> There used to be a way in DPDK to influence the compiler to compile for a
> >> certain architecture, e.g. 'nehalem'. I will try that option, but first I
> >> want to make sure the steps I am executing are fine.
> >>
> >> (1) I compile the VPP (18.04) code on an x86_64 system with the following
> >> CPUFLAGS. My system has 'avx, avx2, sse3, sse4_2' for SIMD.
> >>
> >> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> >> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> >> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> >> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
> >> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread steven luong via Lists.Fd.Io
Ravi,

I suppose you already checked the obvious: that the vhost connection is
established and shared memory has at least 1 region in show vhost. For the
traffic issue, use show error to see why packets are dropping, and trace add
vhost-user-input plus show trace to see whether vhost is getting the packet.
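
As a concrete sketch, the trace workflow on the host would look roughly like this (packet count is arbitrary):

vpp# trace add vhost-user-input 20
vpp# clear error
... generate the ping from the container ...
vpp# show trace
vpp# show error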

Steven

On 6/6/18, 1:33 PM, "Ravi Kerur"  wrote:

Damjan, Steven,

I will get back to the system on which VPP is crashing and get more
info on it later.

For now, I got hold of another system (also Ubuntu 16.04, x86_64) and tried
the same configuration:

VPP vhost-user on host
VPP virtio-user on a container

This time VPP didn't crash. Ping doesn't work though. Both vhost-user
and virtio are transmitting and receiving packets. What do I need to
enable so that ping works?

(1) on host:
show interface
              Name               Idx    State          Counter           Count
VhostEthernet0                    1     down
VhostEthernet1                    2     down
VirtualEthernet0/0/0              3      up       rx packets                5
                                                  rx bytes                210
                                                  tx packets                5
                                                  tx bytes                210
                                                  drops                    10
local0                            0     down
vpp# show ip arp
vpp#


(2) On container
show interface
              Name               Idx    State          Counter           Count
VirtioUser0/0/0                   1      up       rx packets                5
                                                  rx bytes                210
                                                  tx packets                5
                                                  tx bytes                210
                                                  drops                    10
local0                            0     down
vpp# show ip arp
vpp#

Thanks.

On Wed, Jun 6, 2018 at 10:44 AM, Saxena, Nitin  
wrote:
> Hi Ravi,
>
> Sorry for diluting your topic. From your stack trace and show interface 
output I thought you are using OCTEONTx.
>
> Regards,
> Nitin
>
>> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
>>
>> Steven, Damjan, Nitin,
>>
> >> Let me clarify so there is no confusion; since you are assisting me to
> >> get this working, I will make sure we are all on the same page. I believe
> >> OcteonTx is related to Cavium/ARM and I am not using it.
>>
>> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
>> 2MB I had to use '--single-file-segments' option.
>>
> >> There used to be a way in DPDK to influence the compiler to compile for a
> >> certain architecture, e.g. 'nehalem'. I will try that option, but first I
> >> want to make sure the steps I am executing are fine.
>>
> >> (1) I compile the VPP (18.04) code on an x86_64 system with the following
> >> CPUFLAGS. My system has 'avx, avx2, sse3, sse4_2' for SIMD.
>>
>> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
>> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
>> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
>> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
>> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
>> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
>> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
>> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
>> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
>>
>> (2) I run VPP on the same system.
>>
>> (3) VPP on host has following startup.conf
>> unix {
>>  nodaemon
>>  log /var/log/vpp/vpp.log
>>  full-coredump
>>  cli-listen /run/vpp/cli.sock
>>  gid vpp
>> }
>>
>> api-trace {
>>  on
>> }
>>
>> api-segment {
>>  gid vpp
>> }
>>
>> dpdk {
>>  no-pci
>>
>>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
>>
>>  huge-dir /dev/hugepages_1G
>>  socket-mem 2,0
>> }
>>
>> (4) VPP vhost-user config (on host)
>> create vhost socket /var/run/vpp/sock3.sock
>> set interface state VirtualEthernet0/0/0 up
>> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>
>> (5) show dpdk version (Version is the same on host and container, EAL
>> params are different)
>> DPDK Version: DPDK 18.02.1
>> DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
>> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-06 Thread Nitin Saxena
Hi Ravi,

Sorry for diluting your topic. From your stack trace and show interface output
I thought you were using OCTEONTx.

Regards,
Nitin

> On 06-Jun-2018, at 22:10, Ravi Kerur  wrote:
> 
> Steven, Damjan, Nitin,
> 
> Let me clarify so there is no confusion; since you are assisting me to
> get this working, I will make sure we are all on the same page. I believe
> OcteonTx is related to Cavium/ARM and I am not using it.
> 
> DPDK/testpmd (vhost-virtio) works with both 2MB and 1GB hugepages. For
> 2MB I had to use '--single-file-segments' option.
> 
> There used to be a way in DPDK to influence the compiler to compile for a
> certain architecture, e.g. 'nehalem'. I will try that option, but first I
> want to make sure the steps I am executing are fine.
> 
> (1) I compile the VPP (18.04) code on an x86_64 system with the following
> CPUFLAGS. My system has 'avx, avx2, sse3, sse4_2' for SIMD.
> 
> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c
> rdrand lahf_lm abm epb invpcid_single retpoline kaiser tpr_shadow vnmi
> flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms
> invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
> 
> (2) I run VPP on the same system.
> 
> (3) VPP on host has following startup.conf
> unix {
>  nodaemon
>  log /var/log/vpp/vpp.log
>  full-coredump
>  cli-listen /run/vpp/cli.sock
>  gid vpp
> }
> 
> api-trace {
>  on
> }
> 
> api-segment {
>  gid vpp
> }
> 
> dpdk {
>  no-pci
> 
>  vdev net_vhost0,iface=/var/run/vpp/sock1.sock
>  vdev net_vhost1,iface=/var/run/vpp/sock2.sock
> 
>  huge-dir /dev/hugepages_1G
>  socket-mem 2,0
> }
> 
> (4) VPP vhost-user config (on host)
> create vhost socket /var/run/vpp/sock3.sock
> set interface state VirtualEthernet0/0/0 up
> set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
> 
> (5) show dpdk version (Version is the same on host and container, EAL
> params are different)
> DPDK Version: DPDK 18.02.1
> DPDK EAL init args:   -c 1 -n 4 --no-pci --vdev
> net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
> net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1G
> --master-lcore 0 --socket-mem 2,0
> 
> (6) Container is instantiated as follows
> docker run -it --privileged -v
> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
> /dev/hugepages_1G:/dev/hugepages_1G dpdk-app-vpp:latest
> 
> (6) VPP startup.conf inside container is as follows
> unix {
>  nodaemon
>  log /var/log/vpp/vpp.log
>  full-coredump
>  cli-listen /run/vpp/cli.sock
>  gid vpp
> }
> 
> api-trace {
>  on
> }
> 
> api-segment {
>  gid vpp
> }
> 
> dpdk {
>  no-pci
>  huge-dir /dev/hugepages_1G
>  socket-mem 1,0
>  vdev virtio_user0,path=/var/run/usvhost1
> }
> 
> (7) VPP virtio-user config (on container)
> set interface state VirtioUser0/0/0  up
> set interface ip address VirtioUser0/0/0 10.1.1.2/24
> 
> (8) Ping... VPP on host crashes. I sent one backtrace yesterday. This
> morning I tried again; no backtrace, but the following messages:
> 
> Program received signal SIGSEGV, Segmentation fault.
> 0x7fd6f2ba3070 in dpdk_input_avx2 () from
> target:/usr/lib/vpp_plugins/dpdk_plugin.so
> (gdb)
> Continuing.
> 
> Program received signal SIGABRT, Aborted.
> 0x7fd734860428 in raise () from target:/lib/x86_64-linux-gnu/libc.so.6
> (gdb)
> Continuing.
> 
> Program terminated with signal SIGABRT, Aborted.
> The program no longer exists.
> (gdb) bt
> No stack.
> (gdb)
> 
> Thanks.
> 
>> On Wed, Jun 6, 2018 at 1:50 AM, Damjan Marion  wrote:
>> 
>> Now I'm completely confused, is this on x86 or Octeon TX?
>> 
>> Regarding the Octeon TX mempool, no idea what it is, but I will not be
>> surprised if it is not compatible with the way we use buffer memory in
>> vpp.
>> VPP expects that buffer memory is allocated by VPP and then given to DPDK
>> via rte_mempool_create_empty() and rte_mempool_populate_iova_tab().
>> 
>> On 6 Jun 2018, at 06:51, Saxena, Nitin  wrote:
>> 
>> Hi Ravi,
>> 
>> Two things to get vhost-user running on OCTEONTx
>> 
>> 1) use either 1 GB hugepages or 512 MB. This you did.
>> 
>> 2) You need one dpdk patch that I merged in dpdk-18.05 related to OcteonTx
>> MTU. You can get patch from dpdk git (search for nsaxena)
>> 
>> Hi damjan,
>> 
>> Currently we don't support Octeon TX mempool. Are you intentionally using
>> it?
>> 
>> I was about to send an email regarding the OCTEONTX mempool, as we enabled it
>> and are running into issues. Any pointers will be helpful, as I haven't gotten
>> to the root cause of the issue.
>> 
>> Thanks,
>> Nitin
>> 
>> On 06-Jun-2018, at 01:40, Damjan Marion  wrote:
>> 
>> Dear Ravi,
>> 
>> Currently we don't support Octeon TX mempool. Are you intentionally using
>> it?
>> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi,

I only have an SSE machine (Ivy Bridge), and DPDK is using the ring mempool as far
as I can tell from gdb. You are using AVX2, which I don't have, so I can't try it
to see whether the Octeontx mempool is the default mempool for AVX2. What do you
put in the dpdk section of the host startup.conf? What is the output of show dpdk
version?
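
One way to check which mempool ops a DPDK build defaults to is to grep its build config; the option names below are assumed from the DPDK 18.02 config/common_base and should be verified against the tree in use (<dpdk-source> is a placeholder):

grep -E 'MBUF_DEFAULT_MEMPOOL_OPS|OCTEONTX_MEMPOOL' <dpdk-source>/config/common_base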

Steven

On 6/5/18, 1:40 PM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
 wrote:

Hi Damjan,

I am not intentionally using it. I am running VPP on an x86 Ubuntu server.

uname -a
4.9.77.2-rt61 #1 SMP PREEMPT RT Tue May 15 20:36:51 UTC 2018 x86_64
x86_64 x86_64 GNU/Linux

Thanks.

On Tue, Jun 5, 2018 at 1:10 PM, Damjan Marion  wrote:
> Dear Ravi,
>
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> Regards,
>
> Damjan
>
> On 5 Jun 2018, at 21:46, Ravi Kerur  wrote:
>
> Steven,
>
> I managed to get Tx/Rx rings setup with 1GB hugepages. However, when I
> assign an IP address to both vhost-user/virtio interfaces and initiate
> a ping VPP crashes.
>
> Any other mechanism available to test Tx/Rx path between Vhost and
> Virtio? Details below.
>
>
> ***On host***
> vpp#show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
>  coalesce frames 32 time 1e-3
>  number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 3)
> virtio_net_hdr_sz 12
> features mask (0x):
> features (0x110008000):
>   VIRTIO_NET_F_MRG_RXBUF (15)
>   VIRTIO_F_INDIRECT_DESC (28)
>   VIRTIO_F_VERSION_1 (32)
>  protocol features (0x0)
>
> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>
> rx placement:
>   thread 0 on vring 1, polling
> tx placement: lock-free
>   thread 0 on vring 0
>
> Memory regions (total 1)
> region  fd  guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
> ======  ==  ===============  ===========  ==============  ===========  =========
>      0  26  0x7f54c000       0x4000       0x7f54c000      0x           0x7faf
>
> Virtqueue 0 (TX)
>  qsz 256 last_avail_idx 0 last_used_idx 0
>  avail.flags 1 avail.idx 256 used.flags 1 used.idx 0
>  kickfd 27 callfd 24 errfd -1
>
> Virtqueue 1 (RX)
>  qsz 256 last_avail_idx 0 last_used_idx 0
>  avail.flags 1 avail.idx 0 used.flags 1 used.idx 0
>  kickfd 28 callfd 25 errfd -1
>
>
> vpp#set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>
> On container**
> vpp# show interface VirtioUser0/0/0
>              Name               Idx    State          Counter           Count
> VirtioUser0/0/0                   1      up
> vpp#
> vpp# set interface ip address VirtioUser0/0/0 10.1.1.2/24
> vpp#
> vpp# ping 10.1.1.1
>
> Statistics: 5 sent, 0 received, 100% packet loss
> vpp#
>
>
> Host vpp crash with following backtrace**
> Continuing.
>
> Program received signal SIGSEGV, Segmentation fault.
> octeontx_fpa_bufpool_alloc (handle=0)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
> 57return (void *)(uintptr_t)fpavf_read64((void *)(handle +
> (gdb) bt
> #0  octeontx_fpa_bufpool_alloc (handle=0)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
> #1  octeontx_fpavf_dequeue (mp=0x7fae7fc9ab40, obj_table=0x7fb04d868880,
> n=528)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:98
> #2  0x7fb04b73bdef in rte_mempool_ops_dequeue_bulk (n=528,
> obj_table=,
>mp=0x7fae7fc9ab40)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:492
> #3  __mempool_generic_get (cache=, n=,
> obj_table=,
>mp=)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1271
> #4  rte_mempool_generic_get (cache=, n=,
>obj_table=, mp=)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1306
> #5  rte_mempool_get_bulk (n=528, obj_table=,
> mp=0x7fae7fc9ab40)
>at
> 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1339
> #6  dpdk_buffer_fill_free_list_avx2 (vm=0x7fb08ec69480
> , fl=0x7fb04cb2b100,
>min_free_buffers=)
>at 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread Ravi Kerur
Damjan, Steven,

Kindly let me know if there is anything I have messed up.

I have compiled VPP on x86_64 and done everything as suggested by Steven.

Thanks.

On Tue, Jun 5, 2018 at 1:39 PM, Ravi Kerur  wrote:
> Hi Damjan,
>
> I am not intentionally using it. I am running VPP on an x86 Ubuntu server.
>
> uname -a
> 4.9.77.2-rt61 #1 SMP PREEMPT RT Tue May 15 20:36:51 UTC 2018 x86_64
> x86_64 x86_64 GNU/Linux
>
> Thanks.
>
> On Tue, Jun 5, 2018 at 1:10 PM, Damjan Marion  wrote:
>> Dear Ravi,
>>
>> Currently we don't support Octeon TX mempool. Are you intentionally using
>> it?
>>
>> Regards,
>>
>> Damjan
>>
>> On 5 Jun 2018, at 21:46, Ravi Kerur  wrote:
>>
>> Steven,
>>
>> I managed to get Tx/Rx rings setup with 1GB hugepages. However, when I
>> assign an IP address to both vhost-user/virtio interfaces and initiate
>> a ping VPP crashes.
>>
>> Any other mechanism available to test Tx/Rx path between Vhost and
>> Virtio? Details below.
>>
>>
>> ***On host***
>> vpp#show vhost-user VirtualEthernet0/0/0
>> Virtio vhost-user interfaces
>> Global:
>>  coalesce frames 32 time 1e-3
>>  number of rx virtqueues in interrupt mode: 0
>> Interface: VirtualEthernet0/0/0 (ifindex 3)
>> virtio_net_hdr_sz 12
>> features mask (0x):
>> features (0x110008000):
>>   VIRTIO_NET_F_MRG_RXBUF (15)
>>   VIRTIO_F_INDIRECT_DESC (28)
>>   VIRTIO_F_VERSION_1 (32)
>>  protocol features (0x0)
>>
>> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>>
>> rx placement:
>>   thread 0 on vring 1, polling
>> tx placement: lock-free
>>   thread 0 on vring 0
>>
>> Memory regions (total 1)
>> region  fd  guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
>> ======  ==  ===============  ===========  ==============  ===========  =========
>>      0  26  0x7f54c000       0x4000       0x7f54c000      0x           0x7faf
>>
>> Virtqueue 0 (TX)
>>  qsz 256 last_avail_idx 0 last_used_idx 0
>>  avail.flags 1 avail.idx 256 used.flags 1 used.idx 0
>>  kickfd 27 callfd 24 errfd -1
>>
>> Virtqueue 1 (RX)
>>  qsz 256 last_avail_idx 0 last_used_idx 0
>>  avail.flags 1 avail.idx 0 used.flags 1 used.idx 0
>>  kickfd 28 callfd 25 errfd -1
>>
>>
>> vpp#set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>>
>> On container**
>> vpp# show interface VirtioUser0/0/0
>>              Name               Idx    State          Counter           Count
>> VirtioUser0/0/0                   1      up
>> vpp#
>> vpp# set interface ip address VirtioUser0/0/0 10.1.1.2/24
>> vpp#
>> vpp# ping 10.1.1.1
>>
>> Statistics: 5 sent, 0 received, 100% packet loss
>> vpp#
>>
>>
>> Host vpp crash with following backtrace**
>> Continuing.
>>
>> Program received signal SIGSEGV, Segmentation fault.
>> octeontx_fpa_bufpool_alloc (handle=0)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
>> 57return (void *)(uintptr_t)fpavf_read64((void *)(handle +
>> (gdb) bt
>> #0  octeontx_fpa_bufpool_alloc (handle=0)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
>> #1  octeontx_fpavf_dequeue (mp=0x7fae7fc9ab40, obj_table=0x7fb04d868880,
>> n=528)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:98
>> #2  0x7fb04b73bdef in rte_mempool_ops_dequeue_bulk (n=528,
>> obj_table=,
>>mp=0x7fae7fc9ab40)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:492
>> #3  __mempool_generic_get (cache=, n=,
>> obj_table=,
>>mp=)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1271
>> #4  rte_mempool_generic_get (cache=, n=,
>>obj_table=, mp=)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1306
>> #5  rte_mempool_get_bulk (n=528, obj_table=,
>> mp=0x7fae7fc9ab40)
>>at
>> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1339
>> #6  dpdk_buffer_fill_free_list_avx2 (vm=0x7fb08ec69480
>> , fl=0x7fb04cb2b100,
>>min_free_buffers=)
>>at /var/venom/rk-vpp-1804/vpp/build-data/../src/plugins/dpdk/buffer.c:228
>> #7  0x7fb08e5046ea in vlib_buffer_alloc_from_free_list (index=0
>> '\000', n_buffers=514,
>>buffers=0x7fb04cb8ec58, vm=0x7fb08ec69480 )
>>at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/buffer_funcs.h:306
>> #8  vhost_user_if_input (mode=, node=0x7fb04d0f5b80,
>> qid=,
>>vui=0x7fb04d87523c, vum=0x7fb08e9b9560 ,
>>vm=0x7fb08ec69480 )
>>at
>> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1644
>> #9  vhost_user_input (f=, node=,
>> vm=)
>>at
>> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread Ravi Kerur
Hi Damjan,

I am not intentionally using it. I am running VPP on an x86 Ubuntu server.

uname -a
4.9.77.2-rt61 #1 SMP PREEMPT RT Tue May 15 20:36:51 UTC 2018 x86_64
x86_64 x86_64 GNU/Linux

Thanks.

On Tue, Jun 5, 2018 at 1:10 PM, Damjan Marion  wrote:
> Dear Ravi,
>
> Currently we don't support Octeon TX mempool. Are you intentionally using
> it?
>
> Regards,
>
> Damjan
>
> On 5 Jun 2018, at 21:46, Ravi Kerur  wrote:
>
> Steven,
>
> I managed to get Tx/Rx rings setup with 1GB hugepages. However, when I
> assign an IP address to both vhost-user/virtio interfaces and initiate
> a ping VPP crashes.
>
> Any other mechanism available to test Tx/Rx path between Vhost and
> Virtio? Details below.
>
>
> ***On host***
> vpp#show vhost-user VirtualEthernet0/0/0
> Virtio vhost-user interfaces
> Global:
>  coalesce frames 32 time 1e-3
>  number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 3)
> virtio_net_hdr_sz 12
> features mask (0x):
> features (0x110008000):
>   VIRTIO_NET_F_MRG_RXBUF (15)
>   VIRTIO_F_INDIRECT_DESC (28)
>   VIRTIO_F_VERSION_1 (32)
>  protocol features (0x0)
>
> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>
> rx placement:
>   thread 0 on vring 1, polling
> tx placement: lock-free
>   thread 0 on vring 0
>
> Memory regions (total 1)
> region  fd  guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
> ======  ==  ===============  ===========  ==============  ===========  =========
>      0  26  0x7f54c000       0x4000       0x7f54c000      0x           0x7faf
>
> Virtqueue 0 (TX)
>  qsz 256 last_avail_idx 0 last_used_idx 0
>  avail.flags 1 avail.idx 256 used.flags 1 used.idx 0
>  kickfd 27 callfd 24 errfd -1
>
> Virtqueue 1 (RX)
>  qsz 256 last_avail_idx 0 last_used_idx 0
>  avail.flags 1 avail.idx 0 used.flags 1 used.idx 0
>  kickfd 28 callfd 25 errfd -1
>
>
> vpp#set interface ip address VirtualEthernet0/0/0 10.1.1.1/24
>
> On container**
> vpp# show interface VirtioUser0/0/0
>              Name               Idx    State          Counter           Count
> VirtioUser0/0/0                   1      up
> vpp#
> vpp# set interface ip address VirtioUser0/0/0 10.1.1.2/24
> vpp#
> vpp# ping 10.1.1.1
>
> Statistics: 5 sent, 0 received, 100% packet loss
> vpp#
>
>
> Host vpp crash with following backtrace**
> Continuing.
>
> Program received signal SIGSEGV, Segmentation fault.
> octeontx_fpa_bufpool_alloc (handle=0)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
> 57return (void *)(uintptr_t)fpavf_read64((void *)(handle +
> (gdb) bt
> #0  octeontx_fpa_bufpool_alloc (handle=0)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
> #1  octeontx_fpavf_dequeue (mp=0x7fae7fc9ab40, obj_table=0x7fb04d868880,
> n=528)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:98
> #2  0x7fb04b73bdef in rte_mempool_ops_dequeue_bulk (n=528,
> obj_table=,
>mp=0x7fae7fc9ab40)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:492
> #3  __mempool_generic_get (cache=, n=,
> obj_table=,
>mp=)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1271
> #4  rte_mempool_generic_get (cache=, n=,
>obj_table=, mp=)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1306
> #5  rte_mempool_get_bulk (n=528, obj_table=,
> mp=0x7fae7fc9ab40)
>at
> /var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1339
> #6  dpdk_buffer_fill_free_list_avx2 (vm=0x7fb08ec69480
> , fl=0x7fb04cb2b100,
>min_free_buffers=)
>at /var/venom/rk-vpp-1804/vpp/build-data/../src/plugins/dpdk/buffer.c:228
> #7  0x7fb08e5046ea in vlib_buffer_alloc_from_free_list (index=0
> '\000', n_buffers=514,
>buffers=0x7fb04cb8ec58, vm=0x7fb08ec69480 )
>at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/buffer_funcs.h:306
> #8  vhost_user_if_input (mode=, node=0x7fb04d0f5b80,
> qid=,
>vui=0x7fb04d87523c, vum=0x7fb08e9b9560 ,
>vm=0x7fb08ec69480 )
>at
> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1644
> #9  vhost_user_input (f=, node=,
> vm=)
>at
> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1947
> #10 vhost_user_input_avx2 (vm=, node=,
> frame=)
>at
> /var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1972
> #11 0x7fb08ea166b3 in dispatch_node (last_time_stamp= out>, frame=0x0,
>dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread Ravi Kerur
Steven,

I managed to get the Tx/Rx rings set up with 1GB hugepages. However, when I
assign an IP address to both the vhost-user and virtio interfaces and initiate
a ping, VPP crashes.

Is there any other mechanism available to test the Tx/Rx path between vhost
and virtio? Details below.


***On host***
vpp#show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 12
 features mask (0x):
 features (0x110008000):
   VIRTIO_NET_F_MRG_RXBUF (15)
   VIRTIO_F_INDIRECT_DESC (28)
   VIRTIO_F_VERSION_1 (32)
  protocol features (0x0)

 socket filename /var/run/vpp/sock3.sock type server errno "Success"

 rx placement:
   thread 0 on vring 1, polling
 tx placement: lock-free
   thread 0 on vring 0

 Memory regions (total 1)
 region  fd  guest_phys_addr  memory_size  userspace_addr  mmap_offset  mmap_addr
 ======  ==  ===============  ===========  ==============  ===========  =========
      0  26  0x7f54c000       0x4000       0x7f54c000      0x           0x7faf

 Virtqueue 0 (TX)
  qsz 256 last_avail_idx 0 last_used_idx 0
  avail.flags 1 avail.idx 256 used.flags 1 used.idx 0
  kickfd 27 callfd 24 errfd -1

 Virtqueue 1 (RX)
  qsz 256 last_avail_idx 0 last_used_idx 0
  avail.flags 1 avail.idx 0 used.flags 1 used.idx 0
  kickfd 28 callfd 25 errfd -1


vpp#set interface ip address VirtualEthernet0/0/0 10.1.1.1/24

On container**
vpp# show interface VirtioUser0/0/0
              Name               Idx    State          Counter           Count
VirtioUser0/0/0                   1      up
vpp#
vpp# set interface ip address VirtioUser0/0/0 10.1.1.2/24
vpp#
vpp# ping 10.1.1.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp#


Host vpp crash with following backtrace**
Continuing.

Program received signal SIGSEGV, Segmentation fault.
octeontx_fpa_bufpool_alloc (handle=0)
at 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
57return (void *)(uintptr_t)fpavf_read64((void *)(handle +
(gdb) bt
#0  octeontx_fpa_bufpool_alloc (handle=0)
at 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:57
#1  octeontx_fpavf_dequeue (mp=0x7fae7fc9ab40, obj_table=0x7fb04d868880, n=528)
at 
/var/venom/rk-vpp-1804/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/mempool/octeontx/rte_mempool_octeontx.c:98
#2  0x7fb04b73bdef in rte_mempool_ops_dequeue_bulk (n=528,
obj_table=,
mp=0x7fae7fc9ab40)
at 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:492
#3  __mempool_generic_get (cache=, n=,
obj_table=,
mp=)
at 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1271
#4  rte_mempool_generic_get (cache=, n=,
obj_table=, mp=)
at 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1306
#5  rte_mempool_get_bulk (n=528, obj_table=, mp=0x7fae7fc9ab40)
at 
/var/venom/rk-vpp-1804/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_mempool.h:1339
#6  dpdk_buffer_fill_free_list_avx2 (vm=0x7fb08ec69480
, fl=0x7fb04cb2b100,
min_free_buffers=)
at /var/venom/rk-vpp-1804/vpp/build-data/../src/plugins/dpdk/buffer.c:228
#7  0x7fb08e5046ea in vlib_buffer_alloc_from_free_list (index=0
'\000', n_buffers=514,
buffers=0x7fb04cb8ec58, vm=0x7fb08ec69480 )
at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/buffer_funcs.h:306
#8  vhost_user_if_input (mode=, node=0x7fb04d0f5b80,
qid=,
vui=0x7fb04d87523c, vum=0x7fb08e9b9560 ,
vm=0x7fb08ec69480 )
at 
/var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1644
#9  vhost_user_input (f=, node=,
vm=)
at 
/var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1947
#10 vhost_user_input_avx2 (vm=, node=,
frame=)
at 
/var/venom/rk-vpp-1804/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1972
#11 0x7fb08ea166b3 in dispatch_node (last_time_stamp=, frame=0x0,
dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
node=0x7fb04d0f5b80,
vm=0x7fb08ec69480 )
at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:988
#12 vlib_main_or_worker_loop (is_main=1, vm=0x7fb08ec69480 )
at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:1505
#13 vlib_main_loop (vm=0x7fb08ec69480 )
at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:1633
#14 vlib_main (vm=vm@entry=0x7fb08ec69480 ,
input=input@entry=0x7fb04d077fa0)
at /var/venom/rk-vpp-1804/vpp/build-data/../src/vlib/main.c:1787
#15 0x7fb08ea4d683 in thread0 (arg=140396286350464)
at 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi,

In order to use dpdk virtio_user, you need 1GB huge pages.
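
A minimal sketch of providing 1GB hugepages on the host (assumes the CPU supports pdpe1gb, which the cpu flags earlier in this thread show; counts are illustrative): boot with default_hugepagesz=1G hugepagesz=1G hugepages=4 on the kernel command line, then:

mkdir -p /dev/hugepages_1G
mount -t hugetlbfs -o pagesize=1G nodev /dev/hugepages_1G
grep -i huge /proc/meminfo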

Steven

On 6/5/18, 11:17 AM, "Ravi Kerur"  wrote:

Hi Steven,

Connection is the problem. I don't see memory regions setup correctly.
Below are some details. Currently I am using 2MB hugepages.

(1) Create vhost-user server
debug vhost-user on
vpp# create vhost socket /var/run/vpp/sock3.sock server
VirtualEthernet0/0/0
vpp# set interface state VirtualEthernet0/0/0 up
vpp#
vpp#

(2) Instantiate a container
docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages:/dev/hugepages dpdk-app-vpp:latest

(3) Inside the container run EAL/DPDK virtio with following startup conf.
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
no-pci
vdev virtio_user0,path=/var/run/usvhost1
}

Following errors are seen due to 2MB hugepages and I think DPDK
requires "--single-file-segments" option.

/usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
--vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
/run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
64,64
/usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
--vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
/run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
64,64
EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs
found for that size
EAL: VFIO support initialized
get_hugepage_file_info(): Exceed maximum of 8
prepare_vhost_memory_user(): Failed to prepare memory for vhost-user
DPDK physical memory layout:


Second test case>
(1) and (2) are same as above. I run VPP inside a container with
following startup config

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
no-pci
single-file-segments
vdev virtio_user0,path=/var/run/usvhost1
}


VPP fails to start with
plugin.so
vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'
vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'

[1]+  Done/usr/bin/vpp -c /etc/vpp/startup.conf
root@867dc128b544:~/dpdk#


show version (on both host and container).
vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed
Apr 25 14:53:07 UTC 2018
vpp#

Thanks.

On Tue, Jun 5, 2018 at 9:23 AM, Steven Luong (sluong)  
wrote:
> Ravi,
>
> Do this
>
> 1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user 
on".
> 2. Bring up the container with the vdev virtio_user commands that you 
have as before
> 3. show vhost-user in the host and verify that it has a shared memory 
region. If not, the connection has a problem. Collect the show vhost-user and 
debug vhost-user and send them to me and stop. If yes, proceed with step 4.
> 4. type "trace vhost-user-input 100" in the host
> 5. clear error, and clear interfaces in the host and the container.
> 6. do the ping from the container.
> 7. Collect show error, show trace, show interface, and show vhost-user in 
the host. Collect show error and show interface in the container. Put output in 
github and provide a link to view. There is no need to send a large file.
>
> Steven
>
> On 6/4/18, 5:50 PM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> Thanks for your help. I am using vhost-user client (VPP in container)
> and vhost-user server (VPP in host). I thought it should work.
>
> create vhost socket /var/run/vpp/sock3.sock server (On host)
>
> create vhost socket /var/run/usvhost1 (On container)
>
> Can you please point me to a document which shows how to create VPP
> virtio_user interfaces or static configuration in
> /etc/vpp/startup.conf?
>
> I have used following declarations in /etc/vpp/startup.conf
>
> # vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
> # vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02
>
> but it doesn't work.
>
> Thanks.
>
> On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong) 
 wrote:
> > Ravi,
> >
> > VPP only supports vhost-user in the device mode. In your example, 
the host, in device mode, and the container also in device mode do not make a 
happy couple. You need one of them, either the host or 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread Ravi Kerur
Hi Steven,

The connection is the problem. I don't see the memory regions set up correctly.
Below are some details. Currently I am using 2MB hugepages.

(1) Create vhost-user server
debug vhost-user on
vpp# create vhost socket /var/run/vpp/sock3.sock server
VirtualEthernet0/0/0
vpp# set interface state VirtualEthernet0/0/0 up
vpp#
vpp#

(2) Instantiate a container
docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages:/dev/hugepages dpdk-app-vpp:latest

(3) Inside the container run EAL/DPDK virtio with following startup conf.
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
no-pci
vdev virtio_user0,path=/var/run/usvhost1
}

The following errors are seen due to 2MB hugepages; I think DPDK
requires the "--single-file-segments" option.

/usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
--vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
/run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
64,64
/usr/bin/vpp[19]: dpdk_config:1275: EAL init args: -c 1 -n 4 --no-pci
--vdev virtio_user0,path=/var/run/usvhost1 --huge-dir
/run/vpp/hugepages --file-prefix vpp --master-lcore 0 --socket-mem
64,64
EAL: 4 hugepages of size 1073741824 reserved, but no mounted hugetlbfs
found for that size
EAL: VFIO support initialized
get_hugepage_file_info(): Exceed maximum of 8
prepare_vhost_memory_user(): Failed to prepare memory for vhost-user
DPDK physical memory layout:
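
Illustrative commands (not part of the original session) to confirm which hugepage sizes are actually visible inside the container:

grep -i huge /proc/meminfo
mount | grep hugetlbfs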


Second test case>
(1) and (2) are same as above. I run VPP inside a container with
following startup config

unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

api-trace {
  on
}

api-segment {
  gid vpp
}

dpdk {
no-pci
single-file-segments
vdev virtio_user0,path=/var/run/usvhost1
}


VPP fails to start with
plugin.so
vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'
vpp[19]: dpdk_config: unknown input `single-file-segments no-pci vd...'

[1]+  Done/usr/bin/vpp -c /etc/vpp/startup.conf
root@867dc128b544:~/dpdk#


show version (on both host and container).
vpp v18.04-rc2~26-gac2b736~b45 built by root on 34a554d1c194 at Wed
Apr 25 14:53:07 UTC 2018
vpp#

Thanks.

On Tue, Jun 5, 2018 at 9:23 AM, Steven Luong (sluong)  wrote:
> Ravi,
>
> Do this
>
> 1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user on".
> 2. Bring up the container with the vdev virtio_user commands that you have as 
> before
> 3. show vhost-user in the host and verify that it has a shared memory region. 
> If not, the connection has a problem. Collect the show vhost-user and debug 
> vhost-user and send them to me and stop. If yes, proceed with step 4.
> 4. type "trace vhost-user-input 100" in the host
> 5. clear error, and clear interfaces in the host and the container.
> 6. do the ping from the container.
> 7. Collect show error, show trace, show interface, and show vhost-user in the 
> host. Collect show error and show interface in the container. Put output in 
> github and provide a link to view. There is no need to send a large file.
>
> Steven
>
> On 6/4/18, 5:50 PM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> Thanks for your help. I am using vhost-user client (VPP in container)
> and vhost-user server (VPP in host). I thought it should work.
>
> create vhost socket /var/run/vpp/sock3.sock server (On host)
>
> create vhost socket /var/run/usvhost1 (On container)
>
> Can you please point me to a document which shows how to create VPP
> virtio_user interfaces or static configuration in
> /etc/vpp/startup.conf?
>
> I have used following declarations in /etc/vpp/startup.conf
>
> # vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
> # vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02
>
> but it doesn't work.
>
> Thanks.
>
> On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong)  
> wrote:
> > Ravi,
> >
> > VPP only supports vhost-user in the device mode. In your example, the 
> host, in device mode, and the container also in device mode do not make a 
> happy couple. You need one of them, either the host or container, running in 
> driver mode using the dpdk vdev virtio_user command in startup.conf. So you 
> need something like this
> >
> > (host) VPP native vhost-user - (container) VPP DPDK vdev virtio_user
> >   -- or --
> > (host) VPP DPDK vdev virtio_user  (container) VPP native vhost-user
> >
> > Steven
> >
> > On 6/4/18, 3:27 PM, "Ravi Kerur"  wrote:
> >
> > Hi Steven
> >
> > Though crash is not happening anymore, there is still an issue with 
> Rx
> > and Tx. To eliminate whether it is testpmd or vpp, I decided to run
> >
> > (1) VPP vhost-user server on host-x
> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-05 Thread steven luong via Lists.Fd.Io
Ravi,

Do this

1. Run VPP native vhost-user in the host. Turn on debug "debug vhost-user on". 
2. Bring up the container with the vdev virtio_user commands that you have as 
before
3. show vhost-user in the host and verify that it has a shared memory region. 
If not, the connection has a problem. Collect the show vhost-user and debug 
vhost-user and send them to me and stop. If yes, proceed with step 4.
4. type "trace vhost-user-input 100" in the host
5. clear error, and clear interfaces in the host and the container.
6. do the ping from the container.
7. Collect show error, show trace, show interface, and show vhost-user in the 
host. Collect show error and show interface in the container. Put output in 
github and provide a link to view. There is no need to send a large file.
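
The steps above, condensed into a CLI sketch (host side unless noted; the peer address placeholder stands for whatever was configured on the host interface):

vpp# debug vhost-user on
vpp# show vhost-user
vpp# trace add vhost-user-input 100
vpp# clear error
vpp# clear interfaces
(container) vpp# ping <host-side address>
vpp# show error
vpp# show trace
vpp# show interface
vpp# show vhost-user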

Steven

On 6/4/18, 5:50 PM, "Ravi Kerur"  wrote:

Hi Steven,

Thanks for your help. I am using vhost-user client (VPP in container)
and vhost-user server (VPP in host). I thought it should work.

create vhost socket /var/run/vpp/sock3.sock server (On host)

create vhost socket /var/run/usvhost1 (On container)

Can you please point me to a document which shows how to create VPP
virtio_user interfaces or static configuration in
/etc/vpp/startup.conf?

I have used following declarations in /etc/vpp/startup.conf

# vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
# vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02

but it doesn't work.

Thanks.

On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong)  
wrote:
> Ravi,
>
> VPP only supports vhost-user in the device mode. In your example, the 
host, in device mode, and the container also in device mode do not make a happy 
couple. You need one of them, either the host or container, running in driver 
mode using the dpdk vdev virtio_user command in startup.conf. So you need 
something like this
>
> (host) VPP native vhost-user - (container) VPP DPDK vdev virtio_user
>   -- or --
> (host) VPP DPDK vdev virtio_user  (container) VPP native vhost-user
>
> Steven
>
> On 6/4/18, 3:27 PM, "Ravi Kerur"  wrote:
>
> Hi Steven
>
> Though crash is not happening anymore, there is still an issue with Rx
> and Tx. To eliminate whether it is testpmd or vpp, I decided to run
>
> (1) VPP vhost-user server on host-x
> (2) Run VPP in a container on host-x and vhost-user client port
> connecting to vhost-user server.
>
> Still doesn't work. Details below. Please let me know if something is
> wrong in what I am doing.
>
>
> (1) VPP vhost-user as a server
> (2) VPP in a container virtio-user or vhost-user client
>
> (1) Create vhost-user server socket on VPP running on host.
>
> vpp#create vhost socket /var/run/vpp/sock3.sock server
> vpp#set interface state VirtualEthernet0/0/0 up
> show vhost-user VirtualEthernet0/0/0 descriptors
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 3)
> virtio_net_hdr_sz 0
> features mask (0x):
> features (0x0):
> protocol features (0x0)
>
> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>
> rx placement:
> tx placement: spin-lock
> thread 0 on vring 0
>
> Memory regions (total 0)
>
> vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
> vpp#
>
> (2) Instantiate a docker container to run VPP connecting to 
sock3.server socket.
>
> docker run -it --privileged -v
> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
> /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
> root@4b1bd06a3225:~/dpdk#
> root@4b1bd06a3225:~/dpdk# ps -ef
> UID PID PPID C STIME TTY TIME CMD
> root 1 0 0 21:39 ? 00:00:00 /bin/bash
> root 17 1 0 21:39 ? 00:00:00 ps -ef
> root@4b1bd06a3225:~/dpdk#
>
> root@8efda6701ace:~/dpdk# ps -ef | grep vpp
> root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
> root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
> root@8efda6701ace:~/dpdk#
>
> vpp#create vhost socket /var/run/usvhost1
> vpp#set interface state VirtualEthernet0/0/0 up
> vpp#show vhost-user VirtualEthernet0/0/0 descriptors
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 1)
> virtio_net_hdr_sz 0
> features mask (0x):
>  

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread Ravi Kerur
Hi Steven,

Thanks for your help. I am using vhost-user client (VPP in container)
and vhost-user server (VPP in host). I thought it should work.

create vhost socket /var/run/vpp/sock3.sock server (On host)

create vhost socket /var/run/usvhost1 (On container)

Can you please point me to a document which shows how to create VPP
virtio_user interfaces or static configuration in
/etc/vpp/startup.conf?

I have used the following declarations in /etc/vpp/startup.conf:

# vdev virtio_user0,path=/var/run/vpp/sock3.sock,mac=52:54:00:00:04:01
# vdev virtio_user1,path=/var/run/vpp/sock4.sock,mac=52:54:00:00:04:02

but it doesn't work.
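
For comparison, the dpdk stanza that does come together elsewhere in this thread (the container side running virtio_user as a driver against the host's vhost-user socket) looks like this:

dpdk {
  no-pci
  huge-dir /dev/hugepages_1G
  socket-mem 1,0
  vdev virtio_user0,path=/var/run/usvhost1
}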

Thanks.

On Mon, Jun 4, 2018 at 3:57 PM, Steven Luong (sluong)  wrote:
> Ravi,
>
> VPP only supports vhost-user in the device mode. In your example, the host, 
> in device mode, and the container also in device mode do not make a happy 
> couple. You need one of them, either the host or container, running in driver 
> mode using the dpdk vdev virtio_user command in startup.conf. So you need 
> something like this
>
> (host) VPP native vhost-user - (container) VPP DPDK vdev virtio_user
>   -- or --
> (host) VPP DPDK vdev virtio_user  (container) VPP native vhost-user
>
> Steven
>
> On 6/4/18, 3:27 PM, "Ravi Kerur"  wrote:
>
> Hi Steven
>
> Though crash is not happening anymore, there is still an issue with Rx
> and Tx. To eliminate whether it is testpmd or vpp, I decided to run
>
> (1) VPP vhost-user server on host-x
> (2) Run VPP in a container on host-x and vhost-user client port
> connecting to vhost-user server.
>
> Still doesn't work. Details below. Please let me know if something is
> wrong in what I am doing.
>
>
> (1) VPP vhost-user as a server
> (2) VPP in a container virtio-user or vhost-user client
>
> (1) Create vhost-user server socket on VPP running on host.
>
> vpp#create vhost socket /var/run/vpp/sock3.sock server
> vpp#set interface state VirtualEthernet0/0/0 up
> show vhost-user VirtualEthernet0/0/0 descriptors
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 3)
> virtio_net_hdr_sz 0
> features mask (0x):
> features (0x0):
> protocol features (0x0)
>
> socket filename /var/run/vpp/sock3.sock type server errno "Success"
>
> rx placement:
> tx placement: spin-lock
> thread 0 on vring 0
>
> Memory regions (total 0)
>
> vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
> vpp#
>
> (2) Instantiate a docker container to run VPP connecting to sock3.server 
> socket.
>
> docker run -it --privileged -v
> /var/run/vpp/sock3.sock:/var/run/usvhost1 -v
> /dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
> root@4b1bd06a3225:~/dpdk#
> root@4b1bd06a3225:~/dpdk# ps -ef
> UID PID PPID C STIME TTY TIME CMD
> root 1 0 0 21:39 ? 00:00:00 /bin/bash
> root 17 1 0 21:39 ? 00:00:00 ps -ef
> root@4b1bd06a3225:~/dpdk#
>
> root@8efda6701ace:~/dpdk# ps -ef | grep vpp
> root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
> root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
> root@8efda6701ace:~/dpdk#
>
> vpp#create vhost socket /var/run/usvhost1
> vpp#set interface state VirtualEthernet0/0/0 up
> vpp#show vhost-user VirtualEthernet0/0/0 descriptors
> Virtio vhost-user interfaces
> Global:
> coalesce frames 32 time 1e-3
> number of rx virtqueues in interrupt mode: 0
> Interface: VirtualEthernet0/0/0 (ifindex 1)
> virtio_net_hdr_sz 0
> features mask (0x):
> features (0x0):
> protocol features (0x0)
>
> socket filename /var/run/usvhost1 type client errno "Success"
>
> rx placement:
> tx placement: spin-lock
> thread 0 on vring 0
>
> Memory regions (total 0)
>
> vpp#
>
> vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.2/24
> vpp#
>
> vpp# ping 192.168.1.1
>
> Statistics: 5 sent, 0 received, 100% packet loss
> vpp#
>
> On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong)  
> wrote:
> > show interface and look for the counter and count columns for the 
> corresponding interface.
> >
> > Steven
> >
> > On 5/31/18, 1:28 PM, "Ravi Kerur"  wrote:
> >
> > Hi Steven,
> >
> > You made my day, thank you. I didn't realize different dpdk versions
> > (vpp -- 18.02.1 and testpmd -- from latest git repo (probably 18.05)
> > could be the cause of the problem, I still dont understand why it
> > should as virtio/vhost messages are meant to setup tx/rx rings
> > correctly?
> >
> > I downloaded dpdk 18.02.1 stable release and at least vpp doesn't
> > crash now (for both vpp-native and dpdk vhost interfaces). I have 
> one
> > question 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread steven luong via Lists.Fd.Io
Ravi,

VPP only supports vhost-user in device mode. In your example, the host in
device mode and the container also in device mode do not make a happy couple.
You need one of them, either the host or the container, running in driver mode
using the dpdk vdev virtio_user command in startup.conf. So you need something
like this:

(host) VPP native vhost-user - (container) VPP DPDK vdev virtio_user
  -- or --
(host) VPP DPDK vdev virtio_user  (container) VPP native vhost-user
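
As a minimal sketch of the first pairing, using the socket path and vdev line that appear elsewhere in this thread:

(host)      vpp# create vhost socket /var/run/vpp/sock3.sock server
(container) startup.conf:  dpdk { no-pci  vdev virtio_user0,path=/var/run/usvhost1 }
            with /var/run/vpp/sock3.sock bind-mounted into the container as /var/run/usvhost1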

Steven

On 6/4/18, 3:27 PM, "Ravi Kerur"  wrote:

Hi Steven

Though crash is not happening anymore, there is still an issue with Rx
and Tx. To eliminate whether it is testpmd or vpp, I decided to run

(1) VPP vhost-user server on host-x
(2) Run VPP in a container on host-x and vhost-user client port
connecting to vhost-user server.

Still doesn't work. Details below. Please let me know if something is
wrong in what I am doing.


(1) VPP vhost-user as a server
(2) VPP in a container virtio-user or vhost-user client

(1) Create vhost-user server socket on VPP running on host.

vpp#create vhost socket /var/run/vpp/sock3.sock server
vpp#set interface state VirtualEthernet0/0/0 up
show vhost-user VirtualEthernet0/0/0 descriptors
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 0
features mask (0x):
features (0x0):
protocol features (0x0)

socket filename /var/run/vpp/sock3.sock type server errno "Success"

rx placement:
tx placement: spin-lock
thread 0 on vring 0

Memory regions (total 0)

vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
vpp#

(2) Instantiate a docker container to run VPP connecting to sock3.server 
socket.

docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
root@4b1bd06a3225:~/dpdk#
root@4b1bd06a3225:~/dpdk# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 21:39 ? 00:00:00 /bin/bash
root 17 1 0 21:39 ? 00:00:00 ps -ef
root@4b1bd06a3225:~/dpdk#

root@8efda6701ace:~/dpdk# ps -ef | grep vpp
root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
root@8efda6701ace:~/dpdk#

vpp#create vhost socket /var/run/usvhost1
vpp#set interface state VirtualEthernet0/0/0 up
vpp#show vhost-user VirtualEthernet0/0/0 descriptors
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 0
features mask (0x):
features (0x0):
protocol features (0x0)

socket filename /var/run/usvhost1 type client errno "Success"

rx placement:
tx placement: spin-lock
thread 0 on vring 0

Memory regions (total 0)

vpp#

vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.2/24
vpp#

vpp# ping 192.168.1.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp#

On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong)  
wrote:
> show interface and look for the counter and count columns for the 
corresponding interface.
>
> Steven
>
> On 5/31/18, 1:28 PM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> You made my day, thank you. I didn't realize different dpdk versions
> (vpp -- 18.02.1 and testpmd -- from latest git repo (probably 18.05)
> could be the cause of the problem, I still dont understand why it
> should as virtio/vhost messages are meant to setup tx/rx rings
> correctly?
>
> I downloaded dpdk 18.02.1 stable release and at least vpp doesn't
> crash now (for both vpp-native and dpdk vhost interfaces). I have one
> question is there a way to read vhost-user statistics counter (Rx/Tx)
> on vpp? I only know
>
> 'show vhost-user ' and 'show vhost-user  descriptors'
> which doesn't show any counters.
>
> Thanks.
>
> On Thu, May 31, 2018 at 11:51 AM, Steven Luong (sluong)
>  wrote:
> > Ravi,
> >
> > For (1) which works, what dpdk version are you using in the host? 
Are you using the same dpdk version as VPP is using? Since you are using VPP 
latest, I think it is 18.02. Type "show dpdk version" at the VPP prompt to find 
out for sure.
> >
> > Steven
> >
> > On 5/31/18, 11:44 AM, "Ravi Kerur"  wrote:
> >
> > Hi Steven,
> >
> > i have tested following scenarios and it basically is not clear 
why

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-06-04 Thread Ravi Kerur
Hi Steven

Though the crash is not happening anymore, there is still an issue with Rx
and Tx. To determine whether it is testpmd or vpp, I decided to run:

(1) VPP vhost-user server on host-x
(2) Run VPP in a container on host-x and vhost-user client port
connecting to vhost-user server.

Still doesn't work. Details below. Please let me know if something is
wrong in what I am doing.


(1) VPP vhost-user as a server
(2) VPP in a container virtio-user or vhost-user client

(1) Create vhost-user server socket on VPP running on host.

vpp#create vhost socket /var/run/vpp/sock3.sock server
vpp#set interface state VirtualEthernet0/0/0 up
show vhost-user VirtualEthernet0/0/0 descriptors
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 0
features mask (0x):
features (0x0):
protocol features (0x0)

socket filename /var/run/vpp/sock3.sock type server errno "Success"

rx placement:
tx placement: spin-lock
thread 0 on vring 0

Memory regions (total 0)

vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.1/24
vpp#

(2) Instantiate a docker container to run VPP, connecting to the sock3.sock server socket.

docker run -it --privileged -v
/var/run/vpp/sock3.sock:/var/run/usvhost1 -v
/dev/hugepages:/dev/hugepages dpdk-app-vpp:latest
root@4b1bd06a3225:~/dpdk#
root@4b1bd06a3225:~/dpdk# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 21:39 ? 00:00:00 /bin/bash
root 17 1 0 21:39 ? 00:00:00 ps -ef
root@4b1bd06a3225:~/dpdk#

root@8efda6701ace:~/dpdk# ps -ef | grep vpp
root 19 1 39 21:41 ? 00:00:03 /usr/bin/vpp -c /etc/vpp/startup.conf
root 25 1 0 21:41 ? 00:00:00 grep --color=auto vpp
root@8efda6701ace:~/dpdk#

vpp#create vhost socket /var/run/usvhost1
vpp#set interface state VirtualEthernet0/0/0 up
vpp#show vhost-user VirtualEthernet0/0/0 descriptors
Virtio vhost-user interfaces
Global:
coalesce frames 32 time 1e-3
number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 1)
virtio_net_hdr_sz 0
features mask (0x):
features (0x0):
protocol features (0x0)

socket filename /var/run/usvhost1 type client errno "Success"

rx placement:
tx placement: spin-lock
thread 0 on vring 0

Memory regions (total 0)

vpp#

vpp# set interface ip address VirtualEthernet0/0/0 192.168.1.2/24
vpp#

vpp# ping 192.168.1.1

Statistics: 5 sent, 0 received, 100% packet loss
vpp#

On Thu, May 31, 2018 at 2:30 PM, Steven Luong (sluong)  wrote:
> show interface and look for the counter and count columns for the 
> corresponding interface.
>
> Steven
>
> On 5/31/18, 1:28 PM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> You made my day, thank you. I didn't realize different dpdk versions
> (vpp -- 18.02.1 and testpmd -- from latest git repo (probably 18.05)
> could be the cause of the problem, I still dont understand why it
> should as virtio/vhost messages are meant to setup tx/rx rings
> correctly?
>
> I downloaded dpdk 18.02.1 stable release and at least vpp doesn't
> crash now (for both vpp-native and dpdk vhost interfaces). I have one
> question is there a way to read vhost-user statistics counter (Rx/Tx)
> on vpp? I only know
>
> 'show vhost-user ' and 'show vhost-user  descriptors'
> which doesn't show any counters.
>
> Thanks.
>
> On Thu, May 31, 2018 at 11:51 AM, Steven Luong (sluong)
>  wrote:
> > Ravi,
> >
> > For (1) which works, what dpdk version are you using in the host? Are 
> you using the same dpdk version as VPP is using? Since you are using VPP 
> latest, I think it is 18.02. Type "show dpdk version" at the VPP prompt to 
> find out for sure.
> >
> > Steven
> >
> > On 5/31/18, 11:44 AM, "Ravi Kerur"  wrote:
> >
> > Hi Steven,
> >
> > I have tested the following scenarios, and it is not clear why
> > you think DPDK is the problem. Is it possible VPP and DPDK use
> > different virtio versions?
> >
> > Following are the scenarios I have tested
> >
> > (1) testpmd/DPDK vhost-user (running on host) and testpmd/DPDK
> > virtio-user (in a container) -- can send and receive packets
> > (2) VPP-native vhost-user (running on host) and testpmd/DPDK
> > virtio-user (in a container) -- VPP crashes and it is in VPP code
> > (3) VPP-DPDK vhost user (running on host) and testpmd/DPDK 
> virtio-user
> > (in a container) -- VPP crashes and in DPDK
> >
> > Thanks.
> >
> > On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong)
> >  wrote:
> > > Ravi,
> > >
> > > I've proved my point -- there is a problem in the way that you 
> invoke testpmd. The shared memory region that it passes to the device is not 
> accessible from the device. I don't know what the correct options are that 
> you need to use. This is really a question for dpdk.
> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
show interface and look for the counter and count columns for the corresponding 
interface.

Steven

On 5/31/18, 1:28 PM, "Ravi Kerur"  wrote:

Hi Steven,

You made my day, thank you. I didn't realize that different dpdk versions
(vpp -- 18.02.1, testpmd -- from the latest git repo, probably 18.05)
could be the cause of the problem. I still don't understand why it
should, since the virtio/vhost messages are meant to set up the tx/rx
rings correctly.

I downloaded dpdk 18.02.1 stable release and at least vpp doesn't
crash now (for both vpp-native and dpdk vhost interfaces). I have one
question: is there a way to read the vhost-user statistics counters
(Rx/Tx) in vpp? I only know

'show vhost-user <interface>' and 'show vhost-user <interface> descriptors',
which don't show any counters.

Thanks.

On Thu, May 31, 2018 at 11:51 AM, Steven Luong (sluong)
 wrote:
> Ravi,
>
> For (1) which works, what dpdk version are you using in the host? Are you 
using the same dpdk version as VPP is using? Since you are using VPP latest, I 
think it is 18.02. Type "show dpdk version" at the VPP prompt to find out for 
sure.
>
> Steven
>
> On 5/31/18, 11:44 AM, "Ravi Kerur"  wrote:
>
> Hi Steven,
>
> I have tested the following scenarios, and it is not clear why
> you think DPDK is the problem. Is it possible VPP and DPDK use
> different virtio versions?
>
> Following are the scenarios I have tested
>
> (1) testpmd/DPDK vhost-user (running on host) and testpmd/DPDK
> virtio-user (in a container) -- can send and receive packets
> (2) VPP-native vhost-user (running on host) and testpmd/DPDK
> virtio-user (in a container) -- VPP crashes and it is in VPP code
> (3) VPP-DPDK vhost user (running on host) and testpmd/DPDK virtio-user
> (in a container) -- VPP crashes and in DPDK
>
> Thanks.
>
> On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong)
>  wrote:
> > Ravi,
> >
> > I've proved my point -- there is a problem in the way that you 
invoke testpmd. The shared memory region that it passes to the device is not 
accessible from the device. I don't know what the correct options are that you 
need to use. This is really a question for dpdk.
> >
> > As a further exercise, you could remove VPP in the host and instead 
run testpmd in device mode using "--vdev 
net_vhost0,iface=/var/run/vpp/sock1.sock" option. I bet you testpmd in the host 
will crash in the same place. I hope you can find out the answer from dpdk and 
tell us about it.
> >
> > Steven
> >
> > On 5/31/18, 9:31 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
 wrote:
> >
> > Hi Steven,
> >
> > Thank you for your help, I removed sock1.sock and sock2.sock,
> > restarted vpp; at least the interfaces get created. However, when I
> > start dpdk/testpmd inside the container it crashes as well. Below are
> > some details. I am using vpp code from the latest repo.
> >
> > (1) On host
> > show interface
> >               Name               Idx   State  Counter  Count
> > VhostEthernet2                     3    down
> > VhostEthernet3                     4    down
> > VirtualFunctionEthernet4/10/4      1    down
> > VirtualFunctionEthernet4/10/6      2    down
> > local0                             0    down
> > vpp#
> > vpp# set interface state VhostEthernet2 up
> > vpp# set interface state VhostEthernet3 up
> > vpp#
> > vpp# set interface l2 bridge VhostEthernet2 1
> > vpp# set interface l2 bridge VhostEthernet3 1
> > vpp#
> >
> > (2) Run testpmd inside the container
> > docker run -it --privileged -v
> > /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
> > /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
> > /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -l 
16-19
> > -n 4 --log-level=8 -m 64 --no-pci
> > --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
> > 
--vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
> > -i
> > EAL: Detected 28 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: 8192 hugepages of size 2097152 reserved, but no mounted 
hugetlbfs
> > found for that size
> > EAL: Probing VFIO support...
> > EAL: VFIO support initialized
> > EAL: Setting up physically contiguous memory...
> > 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread Ravi Kerur
Hi Steven,

I have tested the following scenarios, and it is not clear why
you think DPDK is the problem. Is it possible VPP and DPDK use
different virtio versions?

Following are the scenarios I have tested

(1) testpmd/DPDK vhost-user (running on host) and testpmd/DPDK
virtio-user (in a container) -- can send and receive packets
(2) VPP-native vhost-user (running on host) and testpmd/DPDK
virtio-user (in a container) -- VPP crashes and it is in VPP code
(3) VPP-DPDK vhost user (running on host) and testpmd/DPDK virtio-user
(in a container) -- VPP crashes and in DPDK

Thanks.

On Thu, May 31, 2018 at 10:12 AM, Steven Luong (sluong)
 wrote:
> Ravi,
>
> I've proved my point -- there is a problem in the way that you invoke 
> testpmd. The shared memory region that it passes to the device is not 
> accessible from the device. I don't know what the correct options are that 
> you need to use. This is really a question for dpdk.
>
> As a further exercise, you could remove VPP in the host and instead run 
> testpmd in device mode using "--vdev 
> net_vhost0,iface=/var/run/vpp/sock1.sock" option. I bet you testpmd in the 
> host will crash in the same place. I hope you can find out the answer from 
> dpdk and tell us about it.
>
> Steven
>
> On 5/31/18, 9:31 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
>  wrote:
>
> Hi Steven,
>
> Thank you for your help, I removed sock1.sock and sock2.sock,
> restarted vpp; at least the interfaces get created. However, when I start
> dpdk/testpmd inside the container it crashes as well. Below are some
> details. I am using vpp code from the latest repo.
>
> (1) On host
> show interface
>               Name               Idx   State  Counter  Count
> VhostEthernet2                     3    down
> VhostEthernet3                     4    down
> VirtualFunctionEthernet4/10/4      1    down
> VirtualFunctionEthernet4/10/6      2    down
> local0                             0    down
> vpp#
> vpp# set interface state VhostEthernet2 up
> vpp# set interface state VhostEthernet3 up
> vpp#
> vpp# set interface l2 bridge VhostEthernet2 1
> vpp# set interface l2 bridge VhostEthernet3 1
> vpp#
>
> (2) Run testpmd inside the container
> docker run -it --privileged -v
> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -l 16-19
> -n 4 --log-level=8 -m 64 --no-pci
> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
> -i
> EAL: Detected 28 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: 8192 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Setting up physically contiguous memory...
> EAL: locking hot plug lock memory...
> EAL: primary init32...
> Interactive-mode selected
> Warning: NUMA should be configured manually by using
> --port-numa-config and --ring-numa-config parameters along with
> --numa.
> testpmd: create a new mbuf pool : n=171456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=171456,
> size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Port 0 is now not stopped
> Port 1 is now not stopped
> Please stop the ports first
> Done
> testpmd>
>
> (3) VPP crashes with the same issue but inside dpdk code
>
> (gdb) cont
> Continuing.
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7ffd0d08e700 (LWP 41257)]
> rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>, mbuf_pool=0x7fe17fc883c0,
> pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
> at 
> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
> 1504            free_entries = *((volatile uint16_t *)&vq->avail->idx) -
> (gdb) bt
> #0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>,
> mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0,
> count=count@entry=32)
> at 
> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
> #1  0x7fffb4718e6f in eth_vhost_rx (q=0x7fe17fbbdd80, 
> bufs=0x7fffb671ebc0,
> nb_bufs=<optimized out>)
> at 
> /var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/net/vhost/rte_eth_vhost.c:410
> #2  0x7fffb441cb7c in rte_eth_rx_burst (nb_pkts=256,
> rx_pkts=0x7fffb671ebc0, queue_id=0,
> port_id=3) at
> 
> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread steven luong
Ravi,

I've proved my point -- there is a problem in the way that you invoke testpmd. 
The shared memory region that it passes to the device is not accessible from 
the device. I don't know what the correct options are that you need to use. 
This is really a question for dpdk.

As a further exercise, you could remove VPP in the host and instead run testpmd 
in device mode using "--vdev net_vhost0,iface=/var/run/vpp/sock1.sock" option. 
I bet you testpmd in the host will crash in the same place. I hope you can find 
out the answer from dpdk and tell us about it.

Steven

On 5/31/18, 9:31 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
 wrote:

Hi Steven,

Thank you for your help, I removed sock1.sock and sock2.sock,
restarted vpp; at least the interfaces get created. However, when I start
dpdk/testpmd inside the container it crashes as well. Below are some
details. I am using vpp code from the latest repo.

(1) On host
show interface
              Name               Idx   State  Counter  Count
VhostEthernet2                     3    down
VhostEthernet3                     4    down
VirtualFunctionEthernet4/10/4      1    down
VirtualFunctionEthernet4/10/6      2    down
local0                             0    down
vpp#
vpp# set interface state VhostEthernet2 up
vpp# set interface state VhostEthernet3 up
vpp#
vpp# set interface l2 bridge VhostEthernet2 1
vpp# set interface l2 bridge VhostEthernet3 1
vpp#

(2) Run testpmd inside the container
docker run -it --privileged -v
/var/run/vpp/sock1.sock:/var/run/usvhost1 -v
/var/run/vpp/sock2.sock:/var/run/usvhost2 -v
/dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -l 16-19
-n 4 --log-level=8 -m 64 --no-pci
--vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
--vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
-i
EAL: Detected 28 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: 8192 hugepages of size 2097152 reserved, but no mounted hugetlbfs
found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Setting up physically contiguous memory...
EAL: locking hot plug lock memory...
EAL: primary init32...
Interactive-mode selected
Warning: NUMA should be configured manually by using
--port-numa-config and --ring-numa-config parameters along with
--numa.
testpmd: create a new mbuf pool : n=171456,
size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=171456,
size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Port 0 is now not stopped
Port 1 is now not stopped
Please stop the ports first
Done
testpmd>

(3) VPP crashes with the same issue but inside dpdk code

(gdb) cont
Continuing.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffd0d08e700 (LWP 41257)]
rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>, mbuf_pool=0x7fe17fc883c0,
pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
1504            free_entries = *((volatile uint16_t *)&vq->avail->idx) -
(gdb) bt
#0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>,
mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0,
count=count@entry=32)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
#1  0x7fffb4718e6f in eth_vhost_rx (q=0x7fe17fbbdd80, 
bufs=0x7fffb671ebc0,
nb_bufs=<optimized out>)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/net/vhost/rte_eth_vhost.c:410
#2  0x7fffb441cb7c in rte_eth_rx_burst (nb_pkts=256,
rx_pkts=0x7fffb671ebc0, queue_id=0,
port_id=3) at

/var/venom/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_ethdev.h:3635
#3  dpdk_device_input (queue_id=0, thread_index=<optimized out>,
node=0x7fffb732c700,
xd=0x7fffb7337240, dm=<optimized out>, vm=0x7fffb6703340)
at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:477
#4  dpdk_input_node_fn_avx2 (vm=<optimized out>, node=<optimized out>,
f=<optimized out>)
at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:658
#5  0x77954d35 in dispatch_node
(last_time_stamp=12531752723928016, frame=0x0,
dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
node=0x7fffb732c700,
vm=0x7fffb6703340) at /var/venom/vpp/build-data/../src/vlib/main.c:988
#6  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb6703340)
at /var/venom/vpp/build-data/../src/vlib/main.c:1507
#7  vlib_worker_loop (vm=0x7fffb6703340) at
/var/venom/vpp/build-data/../src/vlib/main.c:1641
#8  

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-31 Thread Ravi Kerur
Hi Steven,

Thank you for your help, I removed sock1.sock and sock2.sock,
restarted vpp; at least the interfaces get created. However, when I start
dpdk/testpmd inside the container it crashes as well. Below are some
details. I am using vpp code from the latest repo.

(1) On host
show interface
              Name               Idx   State  Counter  Count
VhostEthernet2                     3    down
VhostEthernet3                     4    down
VirtualFunctionEthernet4/10/4      1    down
VirtualFunctionEthernet4/10/6      2    down
local0                             0    down
vpp#
vpp# set interface state VhostEthernet2 up
vpp# set interface state VhostEthernet3 up
vpp#
vpp# set interface l2 bridge VhostEthernet2 1
vpp# set interface l2 bridge VhostEthernet3 1
vpp#

(2) Run testpmd inside the container
docker run -it --privileged -v
/var/run/vpp/sock1.sock:/var/run/usvhost1 -v
/var/run/vpp/sock2.sock:/var/run/usvhost2 -v
/dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -l 16-19
-n 4 --log-level=8 -m 64 --no-pci
--vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
--vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
-i
EAL: Detected 28 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: 8192 hugepages of size 2097152 reserved, but no mounted hugetlbfs
found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Setting up physically contiguous memory...
EAL: locking hot plug lock memory...
EAL: primary init32...
Interactive-mode selected
Warning: NUMA should be configured manually by using
--port-numa-config and --ring-numa-config parameters along with
--numa.
testpmd: create a new mbuf pool : n=171456,
size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=171456,
size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Port 0 is now not stopped
Port 1 is now not stopped
Please stop the ports first
Done
testpmd>

(3) VPP crashes with the same issue but inside dpdk code

(gdb) cont
Continuing.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffd0d08e700 (LWP 41257)]
rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>, mbuf_pool=0x7fe17fc883c0,
pkts=pkts@entry=0x7fffb671ebc0, count=count@entry=32)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
1504            free_entries = *((volatile uint16_t *)&vq->avail->idx) -
(gdb) bt
#0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>,
mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0,
count=count@entry=32)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
#1  0x7fffb4718e6f in eth_vhost_rx (q=0x7fe17fbbdd80, bufs=0x7fffb671ebc0,
nb_bufs=<optimized out>)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/drivers/net/vhost/rte_eth_vhost.c:410
#2  0x7fffb441cb7c in rte_eth_rx_burst (nb_pkts=256,
rx_pkts=0x7fffb671ebc0, queue_id=0,
port_id=3) at
/var/venom/vpp/build-root/install-vpp-native/dpdk/include/dpdk/rte_ethdev.h:3635
#3  dpdk_device_input (queue_id=0, thread_index=<optimized out>,
node=0x7fffb732c700,
xd=0x7fffb7337240, dm=<optimized out>, vm=0x7fffb6703340)
at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:477
#4  dpdk_input_node_fn_avx2 (vm=<optimized out>, node=<optimized out>,
f=<optimized out>)
at /var/venom/vpp/build-data/../src/plugins/dpdk/device/node.c:658
#5  0x77954d35 in dispatch_node
(last_time_stamp=12531752723928016, frame=0x0,
dispatch_state=VLIB_NODE_STATE_POLLING, type=VLIB_NODE_TYPE_INPUT,
node=0x7fffb732c700,
vm=0x7fffb6703340) at /var/venom/vpp/build-data/../src/vlib/main.c:988
#6  vlib_main_or_worker_loop (is_main=0, vm=0x7fffb6703340)
at /var/venom/vpp/build-data/../src/vlib/main.c:1507
#7  vlib_worker_loop (vm=0x7fffb6703340) at
/var/venom/vpp/build-data/../src/vlib/main.c:1641
#8  0x76ad25d8 in clib_calljmp ()
at /var/venom/vpp/build-data/../src/vppinfra/longjmp.S:110
#9  0x7ffd0d08ddb0 in ?? ()
#10 0x7fffb4436edd in eal_thread_loop (arg=<optimized out>)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_eal/linuxapp/eal/eal_thread.c:153
#11 0x in ?? ()
(gdb) frame 0
#0  rte_vhost_dequeue_burst (vid=<optimized out>, queue_id=<optimized out>,
mbuf_pool=0x7fe17fc883c0, pkts=pkts@entry=0x7fffb671ebc0,
count=count@entry=32)
at 
/var/venom/vpp/build-root/build-vpp-native/dpdk/dpdk-stable-18.02.1/lib/librte_vhost/virtio_net.c:1504
1504            free_entries = *((volatile uint16_t *)&vq->avail->idx) -
(gdb) p vq
$1 = (struct vhost_virtqueue *) 0x7fc3ffc84b00
(gdb) p vq->avail
$2 = (struct vring_avail *) 0x7ffbfff98000
(gdb) p *$2
Cannot access memory at address 0x7ffbfff98000
(gdb)
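
For reference, this failure mode can be reproduced outside VPP/DPDK. Below is
a minimal, self-contained C sketch (illustrative only -- nothing in it is DPDK
or VPP code, and all names are made up for the demo) of what the gdb output
above shows: a volatile 16-bit load through a pointer whose backing mapping
does not exist in the current process.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    /* Stand-in for the shared region that should back the avail ring. */
    uint16_t *idx = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (idx == MAP_FAILED)
        return 1;
    *idx = 42;
    /* While the page is mapped, the volatile 16-bit load is fine. */
    printf("mapped read:   %u\n", (unsigned) *(volatile uint16_t *) idx);

    /* Drop the mapping; the very same load now touches an unmapped page
     * and the process takes SIGSEGV -- the situation above, where
     * vq->avail points at 0x7ffbfff98000 but nothing is mapped there. */
    munmap(idx, len);
    printf("unmapped read: %u\n", (unsigned) *(volatile uint16_t *) idx);
    return 0;
}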


Thanks.

On Thu, May 31, 2018 at 12:09 AM, Steven Luong (sluong)
 wrote:
> Sorry, I was expecting to see two VhostEthernet interfaces like this. Those 
> VirtualFunctionEthernet are your physical interfaces.
>
> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread Ravi Kerur
Hi Steven,

I am testing both memif and vhost-virtio; unfortunately memif is not
working either. I posted a question to the list; let me know if
something is wrong. Below is the link:

https://lists.fd.io/g/vpp-dev/topic/q_on_memif_between_vpp/20371922?p=,,,20,0,0,0::recentpostdate%2Fsticky,,,20,2,0,20371922

Thanks.

On Wed, May 30, 2018 at 4:41 PM, Ravi Kerur  wrote:
> Hi Steve,
>
> Thank you for your inputs. I added feature-mask to see if it helps in
> setting up the queues correctly; it didn't, so I will remove it. I have
> tried the following combinations:
>
> (1) VPP->vhost-user (on host) and DPDK/testpmd->virtio-user (in a
> container)  -- VPP crashes
> (2) DPDK/testpmd->vhost-user (on host) and DPDK/testpmd->virtio-user
> (in a container) -- works fine
>
> To use DPDK vhost-user inside VPP, I defined configuration in
> startup.conf as mentioned by you and it looks as follows
>
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
> }
>
> api-segment {
>   gid vpp
> }
>
> cpu {
> main-core 1
> corelist-workers 6-9
> }
>
> dpdk {
> dev :04:10.4
> dev :04:10.6
> uio-driver vfio-pci
> vdev net_vhost0,iface=/var/run/vpp/sock1.sock
> vdev net_vhost1,iface=/var/run/vpp/sock2.sock
> huge-dir /dev/hugepages_1GB
> socket-mem 2048,2048
> }
>
> From VPP logs
> dpdk: EAL init args: -c 3c2 -n 4 --vdev
> net_vhost0,iface=/var/run/vpp/sock1.sock --vdev
> net_vhost1,iface=/var/run/vpp/sock2.sock --huge-dir /dev/hugepages_1GB
> -w :04:10.4 -w :04:10.6 --master-lcore 1 --socket-mem
> 2048,2048
>
> However, VPP doesn't create the interfaces at all
>
> vpp# show interface
>               Name               Idx   State  Counter  Count
> VirtualFunctionEthernet4/10/4      1    down
> VirtualFunctionEthernet4/10/6      2    down
> local0                             0    down
>
> Since it is a static mapping, I am assuming they should be created, correct?
>
> Thanks.
>
> On Wed, May 30, 2018 at 3:43 PM, Steven Luong (sluong)  
> wrote:
>> Ravi,
>>
>> First and foremost, get rid of the feature-mask option. I don't know what 
>> 0x4040 does for you. If that does not help, try testing it with dpdk 
>> based vhost-user instead of VPP native vhost-user to make sure that they can 
>> work well with each other first. To use dpdk vhost-user, add a vdev command 
>> in the startup.conf for each vhost-user device that you have.
>>
>> dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }
>>
>> dpdk based vhost-user interface is named VhostEthernet0, VhostEthernet1, 
>> etc. Make sure you use the right interface name to set the state to up.
>>
>> If dpdk based vhost-user does not work with testpmd either, it looks like 
>> some problem with the way that you invoke testpmd.
>>
>> If dpdk based vhost-user works well with the same testpmd device driver and 
>> not vpp native vhost-user, I can set up something similar to yours to look 
>> into it.
>>
>> The device driver, testpmd, is supposed to pass the shared memory region to 
>> VPP for TX/RX queues. It looks like VPP vhost-user might have run into a 
>> bump there with using the shared memory (txvq->avail).
>>
>> Steven
>>
>> PS. vhost-user is not an optimum interface for containers. You may want to 
>> look into using memif if you don't already know about it.
>>
>>
>> On 5/30/18, 2:06 PM, "Ravi Kerur"  wrote:
>>
>> I am not sure whether something is wrong with the setup or it is a bug
>> in vpp; vpp crashes with vhost<-->virtio communication.
>>
>> (1) Vhost-interfaces are created and attached to bridge-domain as follows
>>
>> create vhost socket /var/run/vpp/sock1.sock server feature-mask 
>> 0x4040
>> create vhost socket /var/run/vpp/sock2.sock server feature-mask 
>> 0x4040
>> set interface state VirtualEthernet0/0/0 up
>> set interface state VirtualEthernet0/0/1 up
>>
>> set interface l2 bridge VirtualEthernet0/0/0 1
>> set interface l2 bridge VirtualEthernet0/0/1 1
>>
>>
>> (2) DPDK/testpmd is started in a container to talk to vpp/vhost-user
>> interface as follows
>>
>> docker run -it --privileged -v
>> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>> 4 --log-level=9 -m 64 --no-pci --single-file-segments
>> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
>> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
>> -i
>>
>> (3) show vhost-user VirtualEthernet0/0/1
>> Virtio vhost-user interfaces
>> Global:
>>   coalesce frames 32 time 1e-3
>>   number of rx virtqueues in interrupt mode: 0
>> Interface: VirtualEthernet0/0/1 (ifindex 4)
>> virtio_net_hdr_sz 10
>>  features mask (0x4040):
>>  features (0x0):
>>   protocol features (0x0)
>>
>>  socket filename 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-30 Thread steven luong
Ravi,

First and foremost, get rid of the feature-mask option. I don't know what 
0x4040 does for you. If that does not help, try testing it with dpdk based 
vhost-user instead of VPP native vhost-user to make sure that they can work 
well with each other first. To use dpdk vhost-user, add a vdev command in the 
startup.conf for each vhost-user device that you have.

dpdk { vdev net_vhost0,iface=/var/run/vpp/sock1.sock }

dpdk based vhost-user interface is named VhostEthernet0, VhostEthernet1, etc. 
Make sure you use the right interface name to set the state to up.

If dpdk based vhost-user does not work with testpmd either, it looks like some 
problem with the way that you invoke testpmd.

If dpdk based vhost-user works well with the same testpmd device driver and not 
vpp native vhost-user, I can set up something similar to yours to look into it.

The device driver, testpmd, is supposed to pass the shared memory region to VPP 
for TX/RX queues. It looks like VPP vhost-user might have run into a bump there 
with using the shared memory (txvq->avail). 
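
For context, txvq->avail refers to the virtio "available" ring header that
lives inside that shared memory. A minimal C sketch of its layout per the
virtio spec (illustrative only, not VPP's or DPDK's own definitions):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Virtio "available" ring header; the fields VPP touches first are the
 * ones that show up in the crash: avail->flags and avail->idx. */
struct vring_avail {
    uint16_t flags;   /* e.g. interrupt-suppression flag */
    uint16_t idx;     /* driver's producer index */
    uint16_t ring[];  /* queue-size descriptor indexes follow */
};

int main(void)
{
    /* Everything in this structure sits in memory owned by the driver
     * (testpmd) and is only usable by VPP once the advertised memory
     * regions have been mmapped on the VPP side. */
    printf("flags at %zu, idx at %zu, header is %zu bytes\n",
           offsetof(struct vring_avail, flags),
           offsetof(struct vring_avail, idx),
           sizeof(struct vring_avail));
    return 0;
}

If the region holding this structure is never mapped on the VPP side, the
very first read of avail->flags or avail->idx faults.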

Steven

PS. vhost-user is not an optimum interface for containers. You may want to look 
into using memif if you don't already know about it.


On 5/30/18, 2:06 PM, "Ravi Kerur"  wrote:

I am not sure whether something is wrong with the setup or it is a bug in
vpp; vpp crashes with vhost<-->virtio communication.

(1) Vhost-interfaces are created and attached to bridge-domain as follows

create vhost socket /var/run/vpp/sock1.sock server feature-mask 0x4040
create vhost socket /var/run/vpp/sock2.sock server feature-mask 0x4040
set interface state VirtualEthernet0/0/0 up
set interface state VirtualEthernet0/0/1 up

set interface l2 bridge VirtualEthernet0/0/0 1
set interface l2 bridge VirtualEthernet0/0/1 1


(2) DPDK/testpmd is started in a container to talk to vpp/vhost-user
interface as follows

docker run -it --privileged -v
/var/run/vpp/sock1.sock:/var/run/usvhost1 -v
/var/run/vpp/sock2.sock:/var/run/usvhost2 -v
/dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
4 --log-level=9 -m 64 --no-pci --single-file-segments
--vdev=virtio_user0,path=/var/run/usvhost1,mac=54:00:00:01:01:01
--vdev=virtio_user1,path=/var/run/usvhost2,mac=54:00:00:01:01:02 --
-i

(3) show vhost-user VirtualEthernet0/0/1
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/1 (ifindex 4)
virtio_net_hdr_sz 10
 features mask (0x4040):
 features (0x0):
  protocol features (0x0)

 socket filename /var/run/vpp/sock2.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0
   thread 3 on vring 0
   thread 4 on vring 0

 Memory regions (total 1)
 region fd guest_phys_addr memory_size userspace_addr mmap_offset mmap_addr
 ====== == =============== =========== ============== =========== =========
  0     55 0x7ff7c000      0x4000      0x7ff7c000     0x          0x7ffbc000

vpp# show vhost-user VirtualEthernet0/0/0
Virtio vhost-user interfaces
Global:
  coalesce frames 32 time 1e-3
  number of rx virtqueues in interrupt mode: 0
Interface: VirtualEthernet0/0/0 (ifindex 3)
virtio_net_hdr_sz 10
 features mask (0x4040):
 features (0x0):
  protocol features (0x0)

 socket filename /var/run/vpp/sock1.sock type server errno "Success"

 rx placement:
 tx placement: spin-lock
   thread 0 on vring 0
   thread 1 on vring 0
   thread 2 on vring 0
   thread 3 on vring 0
   thread 4 on vring 0

 Memory regions (total 1)
 region fd guest_phys_addr memory_size userspace_addr mmap_offset mmap_addr
 ====== == =============== =========== ============== =========== =========
  0     51 0x7ff7c000      0x4000      0x7ff7c000     0x          0x7ffc
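
For reference, here is a hedged sketch (not VPP's actual code; every name
below is invented for illustration) of what a vhost-user backend does with
the region columns above: it mmaps the fd it received with the memory table
and translates driver-side ring addresses into pointers it can dereference.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

struct mem_region {
    uint64_t guest_phys_addr;  /* "guest_phys_addr" column */
    uint64_t memory_size;      /* "memory_size" */
    uint64_t userspace_addr;   /* "userspace_addr": the driver's own VA */
    uint64_t mmap_offset;      /* "mmap_offset" into the received fd */
    uint8_t *mmap_addr;        /* "mmap_addr": where the backend mapped it */
};

/* Vring addresses arrive as driver virtual addresses; translate one into
 * a backend pointer, or NULL if it falls outside the region. */
static void *drv_va_to_backend(const struct mem_region *r, uint64_t va)
{
    if (va < r->userspace_addr || va >= r->userspace_addr + r->memory_size)
        return NULL;
    return r->mmap_addr + r->mmap_offset + (va - r->userspace_addr);
}

int main(void)
{
    /* Fake a tiny region with an anonymous mapping just to show the
     * arithmetic; a real backend mmaps the fd sent by the driver. */
    struct mem_region r = {
        .guest_phys_addr = 0x40000000,
        .memory_size     = 4096,
        .userspace_addr  = 0x7f0000000000ULL,
        .mmap_offset     = 0,
        .mmap_addr       = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0),
    };
    if (r.mmap_addr == MAP_FAILED)
        return 1;
    strcpy(drv_va_to_backend(&r, r.userspace_addr), "hello");
    printf("%s\n", (char *) r.mmap_addr);           /* prints "hello" */
    printf("%p\n", drv_va_to_backend(&r, 0x123));   /* NULL: unmapped VA */
    return 0;
}

If this translation yields an address that was never actually mapped, the
first access to the rings faults, which is what the stack trace below shows.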

(4) vpp stack trace
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffd0e090700 (LWP 46570)]
0x77414642 in vhost_user_if_input
(mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
vum=0x778f4480 <vhost_user_main>, vm=0x7fffb672a9c0)
at 
/var/venom/vpp/build-data/../src/vnet/devices/virtio/vhost-user.c:1596
1596  if (PREDICT_FALSE (txvq->avail->flags & 0xFFFE))
(gdb) bt
#0  0x77414642 in vhost_user_if_input
(mode=VNET_HW_INTERFACE_RX_MODE_POLLING,
node=0x7fffb76bab00, qid=<optimized out>, vui=0x7fffb6739700,
vum=0x778f4480 <vhost_user_main>, vm=0x7fffb672a9c0)

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-29 Thread Ravi Kerur
Steve,

Thanks for the inputs on debugging and gdb. I am using gdb on my development
system to debug the issue. I would like to have reliable core
generation on the system where I don't have access to install gdb.
I installed corekeeper and it still doesn't generate a core. I am
running vpp inside a VM (VirtualBox/Vagrant); I am not sure if I need to
set something in the Vagrant config file.

 dpkg -l corekeeper
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version ArchitectureDescription
+++--===-===-==
ii  corekeeper   1.6 amd64   enable core
files and report crashes to the system

Thanks.

On Tue, May 29, 2018 at 9:38 AM, Steven Luong (sluong)  wrote:
> Ravi,
>
> I installed corekeeper and the core file is kept in /var/crash. But why not use 
> gdb to attach to the VPP process?
> To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP prompt.
>
> Steven
>
> On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
>  wrote:
>
> Hi Marco,
>
>
> On Tue, May 29, 2018 at 6:30 AM, Marco Varlese  wrote:
> > Ravi,
> >
> > On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
> >> Hello,
> >>
> >> I have a VM(16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
> >> installed VPP successfully on it. Later I have created vhost-user
> >> interfaces via
> >>
> >> create vhost socket /var/run/vpp/sock1.sock server
> >> create vhost socket /var/run/vpp/sock2.sock server
> >> set interface state VirtualEthernet0/0/0 up
> >> set interface state VirtualEthernet0/0/1 up
> >>
> >> set interface l2 bridge VirtualEthernet0/0/0 1
> >> set interface l2 bridge VirtualEthernet0/0/1 1
> >>
> >> I then run 'DPDK/testpmd' inside a container which will use
> >> virtio-user interfaces using the following command
> >>
> >> docker run -it --privileged -v
> >> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
> >> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
> >> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
> >> 4 --log-level=9 -m 64 --no-pci --single-file-segments
> >> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
> >> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
> >> -i
> >>
> >> VPP Vnet crashes with following message
> >>
> >> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
> >> 0x7fcca4620187, faulting address 0x7fcb317ac000
> >>
> >> Questions:
> >> I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
> >> unix {
> >>   nodaemon
> >>   log /var/log/vpp/vpp.log
> >>   full-coredump
> >>   cli-listen /run/vpp/cli.sock
> >>   gid vpp
> >> }
> >>
> >> But I couldn't locate corefile?
> > The location of the coredump file depends on your system configuration.
> >
> > Please, check "cat /proc/sys/kernel/core_pattern"
> >
> > If you have systemd-coredump in the output of the above command, then 
> likely the
> > location of the coredump files is "/var/lib/systemd/coredump/"
> >
> > You can also change the location of where your system places the 
> coredump files:
> > echo '/PATH_TO_YOUR_LOCATION/core_%e.%p' | sudo tee 
> /proc/sys/kernel/core_pattern
> >
> > See if that helps...
> >
>
> Initially '/proc/sys/kernel/core_pattern' was set to 'core'. I changed
> it to 'systemd-coredump'. Still no core generated. VPP crashes
>
> May 29 08:54:34 localhost vnet[4107]: received signal SIGSEGV, PC
> 0x7f0167751187, faulting address 0x7efff43ac000
> May 29 08:54:34 localhost systemd[1]: vpp.service: Main process
> exited, code=killed, status=6/ABRT
> May 29 08:54:34 localhost systemd[1]: vpp.service: Unit entered failed 
> state.
> May 29 08:54:34 localhost systemd[1]: vpp.service: Failed with result 
> 'signal'.
>
>
> cat /proc/sys/kernel/core_pattern
> systemd-coredump
>
>
> ulimit -a
> core file size  (blocks, -c) unlimited
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 15657
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 15657
> virtual memory

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-29 Thread steven luong
Ravi,

I installed corekeeper and the core file is kept in /var/crash. But why not use 
gdb to attach to the VPP process? 
To turn on VPP vhost-user debug, type "debug vhost-user on" at the VPP prompt.

Steven

On 5/29/18, 9:10 AM, "vpp-dev@lists.fd.io on behalf of Ravi Kerur" 
 wrote:

Hi Marco,


On Tue, May 29, 2018 at 6:30 AM, Marco Varlese  wrote:
> Ravi,
>
> On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
>> Hello,
>>
>> I have a VM(16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
>> installed VPP successfully on it. Later I have created vhost-user
>> interfaces via
>>
>> create vhost socket /var/run/vpp/sock1.sock server
>> create vhost socket /var/run/vpp/sock2.sock server
>> set interface state VirtualEthernet0/0/0 up
>> set interface state VirtualEthernet0/0/1 up
>>
>> set interface l2 bridge VirtualEthernet0/0/0 1
>> set interface l2 bridge VirtualEthernet0/0/1 1
>>
>> I then run 'DPDK/testpmd' inside a container which will use
>> virtio-user interfaces using the following command
>>
>> docker run -it --privileged -v
>> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>> 4 --log-level=9 -m 64 --no-pci --single-file-segments
>> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
>> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
>> -i
>>
>> VPP Vnet crashes with following message
>>
>> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
>> 0x7fcca4620187, faulting address 0x7fcb317ac000
>>
>> Questions:
>> I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
>> unix {
>>   nodaemon
>>   log /var/log/vpp/vpp.log
>>   full-coredump
>>   cli-listen /run/vpp/cli.sock
>>   gid vpp
>> }
>>
>> But I couldn't locate corefile?
> The location of the coredump file depends on your system configuration.
>
> Please, check "cat /proc/sys/kernel/core_pattern"
>
> If you have systemd-coredump in the output of the above command, then 
likely the
> location of the coredump files is "/var/lib/systemd/coredump/"
>
> You can also change the location of where your system places the coredump 
files:
> echo '/PATH_TO_YOUR_LOCATION/core_%e.%p' | sudo tee 
/proc/sys/kernel/core_pattern
>
> See if that helps...
>

Initially '/proc/sys/kernel/core_pattern' was set to 'core'. I changed
it to 'systemd-coredump'. Still no core generated. VPP crashes

May 29 08:54:34 localhost vnet[4107]: received signal SIGSEGV, PC
0x7f0167751187, faulting address 0x7efff43ac000
May 29 08:54:34 localhost systemd[1]: vpp.service: Main process
exited, code=killed, status=6/ABRT
May 29 08:54:34 localhost systemd[1]: vpp.service: Unit entered failed 
state.
May 29 08:54:34 localhost systemd[1]: vpp.service: Failed with result 
'signal'.


cat /proc/sys/kernel/core_pattern
systemd-coredump


ulimit -a
core file size  (blocks, -c) unlimited
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 15657
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 15657
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

cd /var/lib/systemd/coredump/
root@localhost:/var/lib/systemd/coredump# ls
root@localhost:/var/lib/systemd/coredump#

>>
>> (2) How to enable debugs? I have used 'make build' but no additional
>> logs other than those shown below
>>
>>
>> VPP logs from /var/log/syslog is shown below
>> cat /var/log/syslog
>> May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361:
>> plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: abf_plugin.so (ACL based Forwarding)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: acl_plugin.so (Access Control Lists)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device
>> Plugin)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded
>> plugin: cdp_plugin.so
>> May 27 11:40:28 localhost 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-29 Thread Ravi Kerur
Hi Marco,


On Tue, May 29, 2018 at 6:30 AM, Marco Varlese  wrote:
> Ravi,
>
> On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
>> Hello,
>>
>> I have a VM(16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
>> installed VPP successfully on it. Later I have created vhost-user
>> interfaces via
>>
>> create vhost socket /var/run/vpp/sock1.sock server
>> create vhost socket /var/run/vpp/sock2.sock server
>> set interface state VirtualEthernet0/0/0 up
>> set interface state VirtualEthernet0/0/1 up
>>
>> set interface l2 bridge VirtualEthernet0/0/0 1
>> set interface l2 bridge VirtualEthernet0/0/1 1
>>
>> I then run 'DPDK/testpmd' inside a container which will use
>> virtio-user interfaces using the following command
>>
>> docker run -it --privileged -v
>> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
>> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
>> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
>> 4 --log-level=9 -m 64 --no-pci --single-file-segments
>> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
>> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
>> -i
>>
>> VPP Vnet crashes with following message
>>
>> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
>> 0x7fcca4620187, faulting address 0x7fcb317ac000
>>
>> Questions:
>> I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
>> unix {
>>   nodaemon
>>   log /var/log/vpp/vpp.log
>>   full-coredump
>>   cli-listen /run/vpp/cli.sock
>>   gid vpp
>> }
>>
>> But I couldn't locate corefile?
> The location of the coredump file depends on your system configuration.
>
> Please, check "cat /proc/sys/kernel/core_pattern"
>
> If you have systemd-coredump in the output of the above command, then likely 
> the
> location of the coredump files is "/var/lib/systemd/coredump/"
>
> You can also change the location of where your system places the coredump 
> files:
> echo '/PATH_TO_YOUR_LOCATION/core_%e.%p' | sudo tee 
> /proc/sys/kernel/core_pattern
>
> See if that helps...
>

Initially '/proc/sys/kernel/core_pattern' was set to 'core'. I changed
it to 'systemd-coredump'. Still no core generated. VPP crashes

May 29 08:54:34 localhost vnet[4107]: received signal SIGSEGV, PC
0x7f0167751187, faulting address 0x7efff43ac000
May 29 08:54:34 localhost systemd[1]: vpp.service: Main process
exited, code=killed, status=6/ABRT
May 29 08:54:34 localhost systemd[1]: vpp.service: Unit entered failed state.
May 29 08:54:34 localhost systemd[1]: vpp.service: Failed with result 'signal'.


cat /proc/sys/kernel/core_pattern
systemd-coredump


ulimit -a
core file size  (blocks, -c) unlimited
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 15657
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 15657
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited

cd /var/lib/systemd/coredump/
root@localhost:/var/lib/systemd/coredump# ls
root@localhost:/var/lib/systemd/coredump#

>>
>> (2) How to enable debugs? I have used 'make build' but no additional
>> logs other than those shown below
>>
>>
>> VPP logs from /var/log/syslog is shown below
>> cat /var/log/syslog
>> May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361:
>> plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: abf_plugin.so (ACL based Forwarding)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: acl_plugin.so (Access Control Lists)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device
>> Plugin)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded
>> plugin: cdp_plugin.so
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: flowprobe_plugin.so (Flow per Packet)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: gbp_plugin.so (Group Based Policy)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: gtpu_plugin.so (GTPv1-U)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: igmp_plugin.so (IGMP messaging)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
>> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
>> 

Re: [vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-29 Thread Marco Varlese
Ravi,

On Sun, 2018-05-27 at 12:20 -0700, Ravi Kerur wrote:
> Hello,
> 
> I have a VM(16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
> installed VPP successfully on it. Later I have created vhost-user
> interfaces via
> 
> create vhost socket /var/run/vpp/sock1.sock server
> create vhost socket /var/run/vpp/sock2.sock server
> set interface state VirtualEthernet0/0/0 up
> set interface state VirtualEthernet0/0/1 up
> 
> set interface l2 bridge VirtualEthernet0/0/0 1
> set interface l2 bridge VirtualEthernet0/0/1 1
> 
> I then run 'DPDK/testpmd' inside a container which will use
> virtio-user interfaces using the following command
> 
> docker run -it --privileged -v
> /var/run/vpp/sock1.sock:/var/run/usvhost1 -v
> /var/run/vpp/sock2.sock:/var/run/usvhost2 -v
> /dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
> 4 --log-level=9 -m 64 --no-pci --single-file-segments
> --vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
> --vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
> -i
> 
> VPP Vnet crashes with following message
> 
> May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
> 0x7fcca4620187, faulting address 0x7fcb317ac000
> 
> Questions:
> I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
> unix {
>   nodaemon
>   log /var/log/vpp/vpp.log
>   full-coredump
>   cli-listen /run/vpp/cli.sock
>   gid vpp
> }
> 
> But I couldn't locate corefile?
The location of the coredump file depends on your system configuration.

Please, check "cat /proc/sys/kernel/core_pattern"

If you have systemd-coredump in the output of the above command, then likely the
location of the coredump files is "/var/lib/systemd/coredump/"

You can also change the location of where your system places the coredump files:
echo '/PATH_TO_YOUR_LOCATION/core_%e.%p' | sudo tee /proc/sys/kernel/core_pattern

See if that helps...
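
One quick way to confirm the pattern works before chasing the VPP crash
itself: compile a trivial crasher (illustrative only, unrelated to VPP), run
it with 'ulimit -c unlimited' in effect, and check that a core file shows up
at the configured location.

#include <stddef.h>

int main(void)
{
    /* Deliberately dereference NULL so the kernel writes a core file,
     * which lets you confirm where core_pattern actually places it. */
    volatile int *p = NULL;
    return *p;
}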

> 
> (2) How to enable debugs? I have used 'make build' but no additional
> logs other than those shown below
> 
> 
> VPP logs from /var/log/syslog is shown below
> cat /var/log/syslog
> May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361:
> plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: abf_plugin.so (ACL based Forwarding)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: acl_plugin.so (Access Control Lists)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device
> Plugin)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded
> plugin: cdp_plugin.so
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: flowprobe_plugin.so (Flow per Packet)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: gbp_plugin.so (Group Based Policy)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: gtpu_plugin.so (GTPv1-U)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: igmp_plugin.so (IGMP messaging)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: ioam_plugin.so (Inbound OAM)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:117: Plugin
> disabled (default): ixge_plugin.so
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: l2e_plugin.so (L2 Emulation)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: lacp_plugin.so (Link Aggregation Control Protocol)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: lb_plugin.so (Load Balancer)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: memif_plugin.so (Packet Memory Interface (experimetal))
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: nat_plugin.so (Network Address Translation)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: pppoe_plugin.so (PPPoE)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
> May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: srv6as_plugin.so (Static SRv6 proxy)
> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: tlsmbedtls_plugin.so (mbedtls based TLS Engine)
> May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
> plugin: tlsopenssl_plugin.so (openssl 

[vpp-dev] VPP Vnet crash with vhost-user interface

2018-05-27 Thread Ravi Kerur
Hello,

I have a VM(16.04.4 Ubuntu x86_64) with 2 cores and 4G RAM. I have
installed VPP successfully on it. Later I have created vhost-user
interfaces via

create vhost socket /var/run/vpp/sock1.sock server
create vhost socket /var/run/vpp/sock2.sock server
set interface state VirtualEthernet0/0/0 up
set interface state VirtualEthernet0/0/1 up

set interface l2 bridge VirtualEthernet0/0/0 1
set interface l2 bridge VirtualEthernet0/0/1 1

I then run 'DPDK/testpmd' inside a container which will use
virtio-user interfaces using the following command

docker run -it --privileged -v
/var/run/vpp/sock1.sock:/var/run/usvhost1 -v
/var/run/vpp/sock2.sock:/var/run/usvhost2 -v
/dev/hugepages:/dev/hugepages dpdk-app-testpmd ./bin/testpmd -c 0x3 -n
4 --log-level=9 -m 64 --no-pci --single-file-segments
--vdev=virtio_user0,path=/var/run/usvhost1,mac=54:01:00:01:01:01
--vdev=virtio_user1,path=/var/run/usvhost2,mac=54:01:00:01:01:02 --
-i

VPP Vnet crashes with following message

May 27 11:44:00 localhost vnet[6818]: received signal SIGSEGV, PC
0x7fcca4620187, faulting address 0x7fcb317ac000

Questions:
I have 'ulimit -c unlimited' and /etc/vpp/startup.conf has
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
}

But I couldn't locate corefile?

(2) How to enable debugs? I have used 'make build' but no additional
logs other than those shown below


VPP logs from /var/log/syslog is shown below
cat /var/log/syslog
May 27 11:40:28 localhost vpp[6818]: vlib_plugin_early_init:361:
plugin path /usr/lib/vpp_plugins:/usr/lib64/vpp_plugins
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: abf_plugin.so (ACL based Forwarding)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: acl_plugin.so (Access Control Lists)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: avf_plugin.so (Intel Adaptive Virtual Function (AVF) Device
Plugin)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:191: Loaded
plugin: cdp_plugin.so
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: dpdk_plugin.so (Data Plane Development Kit (DPDK))
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: flowprobe_plugin.so (Flow per Packet)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: gbp_plugin.so (Group Based Policy)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: gtpu_plugin.so (GTPv1-U)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: igmp_plugin.so (IGMP messaging)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: ila_plugin.so (Identifier-locator addressing for IPv6)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: ioam_plugin.so (Inbound OAM)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:117: Plugin
disabled (default): ixge_plugin.so
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: l2e_plugin.so (L2 Emulation)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: lacp_plugin.so (Link Aggregation Control Protocol)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: lb_plugin.so (Load Balancer)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: memif_plugin.so (Packet Memory Interface (experimetal))
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: nat_plugin.so (Network Address Translation)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: pppoe_plugin.so (PPPoE)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: srv6ad_plugin.so (Dynamic SRv6 proxy)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: srv6am_plugin.so (Masquerading SRv6 proxy)
May 27 11:40:28 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: srv6as_plugin.so (Static SRv6 proxy)
May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: stn_plugin.so (VPP Steals the NIC for Container integration)
May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: tlsmbedtls_plugin.so (mbedtls based TLS Engine)
May 27 11:40:29 localhost vpp[6818]: load_one_plugin:189: Loaded
plugin: tlsopenssl_plugin.so (openssl based TLS Engine)
May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
load_one_vat_plugin:67: Loaded plugin: dpdk_test_plugin.so
May 27 11:40:29 localhost /usr/bin/vpp[6818]: load_one_vat_plugin:67:
Loaded plugin: dpdk_test_plugin.so
May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
load_one_vat_plugin:67: Loaded plugin: lb_test_plugin.so
May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
load_one_vat_plugin:67: Loaded plugin: flowprobe_test_plugin.so
May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
load_one_vat_plugin:67: Loaded plugin: stn_test_plugin.so
May 27 11:40:29 localhost vpp[6818]: /usr/bin/vpp[6818]:
load_one_vat_plugin:67: Loaded plugin: nat_test_plugin.so
May 27