[dpdk-users] net/i40e on VMware issues

2018-03-01 Thread Yan Fridland
Hello Guys,

I am trying to use an Intel X710 (i40e) NIC on a VMware host with a CentOS VM,
and I am experiencing issues I never saw with the 82599 (ixgbe).
Is there any special DPDK compile-time or runtime configuration I should apply?

Any assistance would be much appreciated.

Thank you,
Yan




Re: [dpdk-users] AVX-512

2018-03-01 Thread Tharaneedharan Vilwanathan
Hi Stephen,

Appreciate your response.

I have ESXi. I am still trying to map the KVM configuration to its ESXi
equivalent, but no luck yet. I have asked the admin to look into it. If anyone
is familiar with the ESXi configuration for this, please let me know.

Thanks
dharani

On Fri, Feb 23, 2018 at 11:38 AM, Stephen Hemminger <
step...@networkplumber.org> wrote:

> On Fri, 23 Feb 2018 11:16:14 -0800
> Tharaneedharan Vilwanathan  wrote:
>
> > Hi All,
> >
> > I have a quick question. I got a new server with a CPU that should have
> > AVX-512 support, but I don't see it in Linux (Ubuntu 18.04).
> >
> > Here is the output:
> >
> > auto@auto-virtual-machine:~$ cat /proc/cpuinfo
> > processor : 0
> > vendor_id : GenuineIntel
> > cpu family : 6
> > model : 85
> > model name : Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
> > stepping : 4
> > microcode : 0x229
> > cpu MHz : 2095.078
> > cache size : 22528 KB
> > physical id : 0
> > siblings : 2
> > core id : 0
> > cpu cores : 2
> > apicid : 0
> > initial apicid : 0
> > fpu : yes
> > fpu_exception : yes
> > cpuid level : 22
> > wp : yes
> > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> > pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm
> > constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc
> > cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt
> > tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm
> > 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase tsc_adjust bmi1 hle
> > avx2 smep bmi2 invpcid rtm rdseed adx smap xsaveopt arat
> > bugs : cpu_meltdown spectre_v1 spectre_v2
> > bogomips : 4190.15
> > clflush size : 64
> > cache_alignment : 64
> > address sizes : 42 bits physical, 48 bits virtual
> > power management:
> >
> > Can someone tell me why I don't see AVX-512?
> >
> > Appreciate your help.
> >
> > Thanks
> > dharani
>
> If you are running in a VM, make sure that your hypervisor passes the AVX
> flags through. In KVM this is usually done with the "Copy host CPU
> configuration" option in virt-manager.
>
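
Once the hypervisor does pass the flags through, availability can also be
checked from inside a DPDK program rather than by reading /proc/cpuinfo. A
minimal sketch, assuming the rte_cpuflags.h API (rte_cpu_get_flag_enabled()
and the RTE_CPUFLAG_AVX512F feature ID); the function name is hypothetical:

#include <stdio.h>
#include <rte_cpuflags.h>

/* Report whether the AVX-512 Foundation flag is visible to this process. */
static int
avx512_visible(void)
{
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) > 0) {
		printf("AVX512F: supported\n");
		return 1;
	}
	printf("AVX512F: not reported by CPUID\n");
	return 0;
}

If this reports the flag as missing inside the guest while the host's
/proc/cpuinfo shows avx512f, the guest CPU model is masking the feature.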


Re: [dpdk-users] Multi-process recovery (is it even possible?)

2018-03-01 Thread Tan, Jianfeng


> -----Original Message-----
> From: Lazarenko, Vlad (WorldQuant)
> [mailto:vlad.lazare...@worldquant.com]
> Sent: Thursday, March 1, 2018 10:53 PM
> To: Tan, Jianfeng; 'users@dpdk.org'
> Subject: RE: Multi-process recovery (is it even possible?)
> 
> Hello Jianfeng,
> 
> Thanks for getting back to me. I thought about using "udata64" too, but that
> didn't work for me when a single packet was fanned out to multiple slave
> processes. Most importantly, it looks like if a slave process crashes
> somewhere in the middle of getting or putting packets from/to a pool, we
> could end up with a deadlock. So I guess I'd have to think about a different
> design, or be ready to bounce all of the processes if one of them fails.

OK, a better design that avoids such a hard issue is the way to go. Good luck!

Thanks,
Jianfeng

> 
> Thanks,
> Vlad
> 
> > -----Original Message-----
> > From: Tan, Jianfeng [mailto:jianfeng@intel.com]
> > Sent: Thursday, March 01, 2018 3:20 AM
> > To: Lazarenko, Vlad (WorldQuant); 'users@dpdk.org'
> > Subject: RE: Multi-process recovery (is it even possible?)
> >
> >
> >
> > > -----Original Message-----
> > > From: users [mailto:users-boun...@dpdk.org] On Behalf Of Lazarenko,
> > > Vlad
> > > (WorldQuant)
> > > Sent: Thursday, March 1, 2018 2:54 AM
> > > To: 'users@dpdk.org'
> > > Subject: [dpdk-users] Multi-process recovery (is it even possible?)
> > >
> > > Guys,
> > >
> > > I am looking for possible solutions to the following problems that come
> > > along with an asymmetric multi-process architecture...
> > >
> > > Given that multiple processes share the same RX/TX queue(s) and packet
> > > pool(s), and that one packet from the RX queue may be fanned out to
> > > multiple slave processes, is there a way to recover from a slave crashing
> > > (or exiting without cleaning up properly)? In theory it could have
> > > incremented an mbuf reference count more than once, and unless everything
> > > is restarted, I don't see a reliable way to release those mbufs back to
> > > the pool.
> >
> > Recycling a single element is very difficult; from what I know, it's next
> > to impossible. Recycling a whole memzone/mempool is easier. So in your
> > case, you might want to use different pools for different queues
> > (processes).
> >
> > If you really want to recycle an element, an rte_mbuf in your case, it
> > might be doable by:
> > 1. setting up an rx callback for each process, and in the callback,
> > storing a special flag at rte_mbuf->udata64;
> > 2. when the primary detects that a secondary is down, iterating over all
> > elements with the special flag and putting them back into the ring.
> >
> > There is a small chance this fails: an mbuf is allocated by a secondary
> > process that crashes before the mbuf is flagged.
> >
> > Thanks,
> > Jianfeng
> >
> >
> > >
> > > Also, if a spinlock is involved and either the master or a slave
> > > crashes, everything simply gets stuck. Is there any way to detect this
> > > (i.e. outside of the data path)?
> > >
> > > Thanks,
> > > Vlad
> > >
> 



[dpdk-users] Intel I350 [8086:1521] igb PMD VLAN offload transmission issue?

2018-03-01 Thread James Huang
Hello,

We're experiencing a PMD VLAN offload transmission issue on an Intel I350
[8086:1521] NIC.

OS: CentOS 6.5 x64
DPDK 17.05.2
NIC firmware checked with Linux ethtool; firmware-version: 1.67, 0x8dd0,
17.5.10

The VLAN offload receive path works fine: we get mbuf ol_flags=0x1c1 with the
correct vlan_tci value.

However, at transmit time, when we set mbuf ol_flags=0x200 and the vlan_tci
value, no packet goes out on the wire. Meanwhile, the dpdk-pdump tool captures
the outgoing packet without the VLAN tag.

This issue does not happen if we switch the NIC from the I350 to an X540 10G
interface.

Also, if we do not use VLAN, the I350 works fine.

Is this an I350 hardware compatibility issue, or something else?
Is it possible to fix?
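
For what it's worth, while the hardware path is being debugged, one possible
workaround is to insert the tag in software before rte_eth_tx_burst(). A
sketch only, assuming the rte_vlan_insert() helper from rte_ether.h (present
in DPDK 17.05); the wrapper name is hypothetical:

#include <rte_ether.h>
#include <rte_mbuf.h>

/* Prepend the 802.1Q header in software from m->vlan_tci instead of
 * requesting hardware insertion; the caller should not set the hardware
 * VLAN-insert bit in ol_flags on this path. Returns 0 on success and may
 * replace *m if the mbuf had to be cloned. */
static int
prepend_sw_vlan(struct rte_mbuf **m, uint16_t vlan_id)
{
	(*m)->vlan_tci = vlan_id;
	return rte_vlan_insert(m);
}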

Regards,
James Huang




Re: [dpdk-users] Multi-process recovery (is it even possible?)

2018-03-01 Thread Lazarenko, Vlad (WorldQuant)
Hello Jianfeng,

Thanks for getting back to me. I thought about using "udata64" too, but that
didn't work for me when a single packet was fanned out to multiple slave
processes. Most importantly, it looks like if a slave process crashes somewhere
in the middle of getting or putting packets from/to a pool, we could end up
with a deadlock. So I guess I'd have to think about a different design, or be
ready to bounce all of the processes if one of them fails.

Thanks,
Vlad

> -----Original Message-----
> From: Tan, Jianfeng [mailto:jianfeng@intel.com]
> Sent: Thursday, March 01, 2018 3:20 AM
> To: Lazarenko, Vlad (WorldQuant); 'users@dpdk.org'
> Subject: RE: Multi-process recovery (is it even possible?)
> 
> 
> 
> > -----Original Message-----
> > From: users [mailto:users-boun...@dpdk.org] On Behalf Of Lazarenko,
> > Vlad
> > (WorldQuant)
> > Sent: Thursday, March 1, 2018 2:54 AM
> > To: 'users@dpdk.org'
> > Subject: [dpdk-users] Multi-process recovery (is it even possible?)
> >
> > Guys,
> >
> > I am looking for possible solutions to the following problems that come
> > along with an asymmetric multi-process architecture...
> >
> > Given that multiple processes share the same RX/TX queue(s) and packet
> > pool(s), and that one packet from the RX queue may be fanned out to
> > multiple slave processes, is there a way to recover from a slave crashing
> > (or exiting without cleaning up properly)? In theory it could have
> > incremented an mbuf reference count more than once, and unless everything
> > is restarted, I don't see a reliable way to release those mbufs back to
> > the pool.
> 
> Recycling a single element is very difficult; from what I know, it's next to
> impossible. Recycling a whole memzone/mempool is easier. So in your case, you
> might want to use different pools for different queues (processes).
> 
> If you really want to recycle an element, an rte_mbuf in your case, it might
> be doable by:
> 1. setting up an rx callback for each process, and in the callback, storing
> a special flag at rte_mbuf->udata64;
> 2. when the primary detects that a secondary is down, iterating over all
> elements with the special flag and putting them back into the ring.
> 
> There is a small chance this fails: an mbuf is allocated by a secondary
> process that crashes before the mbuf is flagged.
> 
> Thanks,
> Jianfeng
> 
> 
> >
> > Also, if a spinlock is involved and either the master or a slave crashes,
> > everything simply gets stuck. Is there any way to detect this (i.e.
> > outside of the data path)?
> >
> > Thanks,
> > Vlad
> >






[dpdk-users] Re: Compile DPDK 17.11 with Mellanox NIC support

2018-03-01 Thread wang.yong19
Thanks for your advice.
It worked after I added the "--upstream-libs" parameter.
I really appreciate your help.

Best regards,
Wang Yong

--origin--
from: Raslan Darawsheh
to: Wang Yong (10032886); george@gmail.com
cc: Nélio Laranjeiro; users@dpdk.org
date: 2018-03-01 21:25
subject: RE: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support
Hi,

I believe you are missing the --upstream-libs option in the OFED installation:

Try it as follows:
./mlnxofedinstall --skip-distro-check --dpdk --add-kernel-support --upstream-libs
/etc/init.d/openibd restart

Then rebuild your DPDK.

Kindest regards,
Raslan Darawsheh

-----Original Message-----
From: users [mailto:users-boun...@dpdk.org] On Behalf Of wang.yon...@zte.com.cn
Sent: Thursday, March 1, 2018 3:19 PM
To: george@gmail.com
Cc: Nélio Laranjeiro ; users@dpdk.org
Subject: Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support

Hi,
I met the same problem. My DPDK build environment is "CentOS 7.0 + kernel
3.10.0-229.el7.x86_64".
I downloaded OFED 4.2 and used the following commands to install:
./mlnxofedinstall --skip-distro-check --dpdk --add-kernel-support
/etc/init.d/openibd restart
However, I still get an error while compiling DPDK 17.11. The error is:

In file included from /home/wangyong/dpdk-17.11/drivers/net/mlx4/mlx4.c:71:0:
/home/wangyong/dpdk-17.11/drivers/net/mlx4/mlx4_rxtx.h:44:31: fatal error:
infiniband/mlx4dv.h: No such file or directory
 #include <infiniband/mlx4dv.h>
   ^
compilation terminated.
make[6]: *** [mlx4.o] Error 1
make[5]: *** [mlx4] Error 2
make[4]: *** [net] Error 2
make[3]: *** [drivers] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2

I checked the "/usr/include/infiniband" directory and I couldn't find the 
mlx4dv.h file.
Are there any other configurations or steps required? Or is it that the Linux
kernel version can't be 3.10?



--origin--
from: george@gmail.com
to: Nelio Laranjeiro
cc: users@dpdk.org
date: 2018-01-05 16:03
subject: Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support

Hi Nelio,

Thanks for the prompt reply and guidance!
My OFED version was 4.1, that's why I was unable to build DPDK 17.11.
With OFED 4.2 everything works on my kernel (v4.4)!

I have one more question: What is the suggested OFED version in order to have a 
fully featured NIC with all the DPDK classification/offloading capabilities?
My NICs are 100G ConnectX-4 (dual and single port).

Best regards,
Georgios

On Fri, Jan 5, 2018 at 8:46 AM, Nelio Laranjeiro  wrote:

> Hi George
>
> On Fri, Jan 05, 2018 at 08:38:11AM +0100, george@gmail.com wrote:
> > Hi,
> >
> > I noticed that DPDK 17.11 brought some changes regarding the
> > Mellanox drivers, hence my compilation script does not work for this
> > version of
> DPDK.
> >
> > Specifically, the release notes of this version state:
> >
> > "Enabled the PMD to run on top of upstream Linux kernel and
> > rdma-core
> libs,
> > removing the dependency on specific Mellanox OFED libraries."
> >
> > Previously, I was setting CONFIG_RTE_LIBRTE_MLX5_PMD but now, with
> > this option set I get build errors in drivers/net/mlx5/mlx5.c.
> >
> > Could you please provide some guidance on how DPDK is expected to
> > work without OFED?
> > Is there any other configuration required?
>
> You should find everything you need in the NIC documentation [1]. You have
> two possibilities: keep using the distribution kernel and install MLNX_OFED
> 4.2, or update the Linux kernel and use RDMA-Core. Both are explained
> in the section "21.5.1. Installation".
>
> Regards,
>
> [1] https://dpdk.org/doc/guides/nics/mlx5.html
>
> --
> Nélio Laranjeiro
> 6WIND
>



--
Georgios Katsikas
Industrial Ph.D. Student
Network Intelligence Group
Decision, Networks, and Analytics (DNA) Lab
RISE SICS
E-Mail:  georgios.katsi...@ri.se

Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support

2018-03-01 Thread Raslan Darawsheh
Hi,

I believe you are missing the --upstream-libs option in the OFED installation:

Try it as follows:
./mlnxofedinstall --skip-distro-check --dpdk --add-kernel-support --upstream-libs
/etc/init.d/openibd restart

Then rebuild your DPDK.

Kindest regards,
Raslan Darawsheh

-----Original Message-----
From: users [mailto:users-boun...@dpdk.org] On Behalf Of wang.yon...@zte.com.cn
Sent: Thursday, March 1, 2018 3:19 PM
To: george@gmail.com
Cc: Nélio Laranjeiro ; users@dpdk.org
Subject: Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support

Hi,
I met the same problem. My DPDK build environment is "CentOS 7.0 + kernel
3.10.0-229.el7.x86_64".
I downloaded OFED 4.2 and used the following commands to install:
./mlnxofedinstall --skip-distro-check --dpdk --add-kernel-support
/etc/init.d/openibd restart
However, I still get an error while compiling DPDK 17.11. The error is:

In file included from /home/wangyong/dpdk-17.11/drivers/net/mlx4/mlx4.c:71:0:
/home/wangyong/dpdk-17.11/drivers/net/mlx4/mlx4_rxtx.h:44:31: fatal error:
infiniband/mlx4dv.h: No such file or directory
 #include <infiniband/mlx4dv.h>
   ^
compilation terminated.
make[6]: *** [mlx4.o] Error 1
make[5]: *** [mlx4] Error 2
make[4]: *** [net] Error 2
make[3]: *** [drivers] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2

I checked the "/usr/include/infiniband" directory and I couldn't find the 
mlx4dv.h file.
Are there any other configurations or steps required? Or is it that the Linux
kernel version can't be 3.10?



--origin--
from: george@gmail.com
to: Nelio Laranjeiro
cc: users@dpdk.org
date: 2018-01-05 16:03
subject: Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support

Hi Nelio,

Thanks for the prompt reply and guidance!
My OFED version was 4.1, that's why I was unable to build DPDK 17.11.
With OFED 4.2 everything works on my kernel (v4.4)!

I have one more question: What is the suggested OFED version in order to have a 
fully featured NIC with all the DPDK classification/offloading capabilities?
My NICs are 100G ConnectX-4 (dual and single port).

Best regards,
Georgios

On Fri, Jan 5, 2018 at 8:46 AM, Nelio Laranjeiro  wrote:

> Hi George
>
> On Fri, Jan 05, 2018 at 08:38:11AM +0100, george@gmail.com wrote:
> > Hi,
> >
> > I noticed that DPDK 17.11 brought some changes regarding the 
> > Mellanox drivers, hence my compilation script does not work for this 
> > version of
> DPDK.
> >
> > Specifically, the release notes of this version state:
> >
> > "Enabled the PMD to run on top of upstream Linux kernel and 
> > rdma-core
> libs,
> > removing the dependency on specific Mellanox OFED libraries."
> >
> > Previously, I was setting CONFIG_RTE_LIBRTE_MLX5_PMD but now, with 
> > this option set I get build errors in drivers/net/mlx5/mlx5.c.
> >
> > Could you please provide some guidance on how DPDK is expected to 
> > work without OFED?
> > Is there any other configuration required?
>
> You should find everything you need in the NIC documentation [1]. You have
> two possibilities: keep using the distribution kernel and install MLNX_OFED
> 4.2, or update the Linux kernel and use RDMA-Core. Both are explained
> in the section "21.5.1. Installation".
>
> Regards,
>
> [1] https://dpdk.org/doc/guides/nics/mlx5.html
>
> --
> Nélio Laranjeiro
> 6WIND
>



--
Georgios Katsikas
Industrial Ph.D. Student
Network Intelligence Group
Decision, Networks, and Analytics (DNA) Lab
RISE SICS
E-Mail:  georgios.katsi...@ri.se


Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support

2018-03-01 Thread wang.yong19
Hi,
I met the same problem. My DPDK build environment is "CentOS 7.0 + kernel
3.10.0-229.el7.x86_64".
I downloaded OFED 4.2 and used the following commands to install:
./mlnxofedinstall --skip-distro-check --dpdk --add-kernel-support
/etc/init.d/openibd restart
However, I still get an error while compiling DPDK 17.11. The error is:

In file included from /home/wangyong/dpdk-17.11/drivers/net/mlx4/mlx4.c:71:0:
/home/wangyong/dpdk-17.11/drivers/net/mlx4/mlx4_rxtx.h:44:31: fatal error: 
infiniband/mlx4dv.h: No such file or directory
 #include <infiniband/mlx4dv.h>
   ^
compilation terminated.
make[6]: *** [mlx4.o] Error 1
make[5]: *** [mlx4] Error 2
make[4]: *** [net] Error 2
make[3]: *** [drivers] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2

I checked the "/usr/include/infiniband" directory and I couldn't find the 
mlx4dv.h file.
Are there any other configurations or steps required? Or is it that the Linux
kernel version can't be 3.10?



--origin--
from: george@gmail.com
to: Nelio Laranjeiro
cc: users@dpdk.org
date: 2018-01-05 16:03
subject: Re: [dpdk-users] Compile DPDK 17.11 with Mellanox NIC support
Hi Nelio,

Thanks for the prompt reply and guidance!
My OFED version was 4.1, that's why I was unable to build DPDK 17.11.
With OFED 4.2 everything works on my kernel (v4.4)!

I have one more question: What is the suggested OFED version in order to
have a fully featured NIC with all the DPDK classification/offloading
capabilities?
My NICs are 100G ConnectX-4 (dual and single port).

Best regards,
Georgios

On Fri, Jan 5, 2018 at 8:46 AM, Nelio Laranjeiro  wrote:

> Hi George
>
> On Fri, Jan 05, 2018 at 08:38:11AM +0100, george@gmail.com wrote:
> > Hi,
> >
> > I noticed that DPDK 17.11 brought some changes regarding the Mellanox
> > drivers, hence my compilation script does not work for this version of
> DPDK.
> >
> > Specifically, the release notes of this version state:
> >
> > "Enabled the PMD to run on top of upstream Linux kernel and rdma-core
> libs,
> > removing the dependency on specific Mellanox OFED libraries."
> >
> > Previously, I was setting CONFIG_RTE_LIBRTE_MLX5_PMD but now, with this
> > option set I get build errors in drivers/net/mlx5/mlx5.c.
> >
> > Could you please provide some guidance on how DPDK is expected to work
> > without OFED?
> > Is there any other configuration required?
>
> You should find everything you need in the NIC documentation [1]. You have
> two possibilities: keep using the distribution kernel and install MLNX_OFED
> 4.2, or update the Linux kernel and use RDMA-Core. Both are explained in
> the section "21.5.1. Installation".
>
> Regards,
>
> [1] https://dpdk.org/doc/guides/nics/mlx5.html
>
> --
> Nélio Laranjeiro
> 6WIND
>



--
Georgios Katsikas
Industrial Ph.D. Student
Network Intelligence Group
Decision, Networks, and Analytics (DNA) Lab
RISE SICS
E-Mail:  georgios.katsi...@ri.se

[dpdk-users] PMD for Broadcom/Emulex OCe14000 OCP Skyhawk-R

2018-03-01 Thread sujith sankar
Hi all,

Is a PMD for the Broadcom/Emulex OCe14000 OCP Skyhawk-R available? There
are a few documents on Broadcom's site, but I could not find its source
code.

I believe the 6WIND team developed the PMD for Broadcom. What is its
status? Is it freely available?

Could you please help me with info on this?

Thanks,
-Sujith


Re: [dpdk-users] Multi-process recovery (is it even possible?)

2018-03-01 Thread Tan, Jianfeng


> -----Original Message-----
> From: users [mailto:users-boun...@dpdk.org] On Behalf Of Lazarenko, Vlad
> (WorldQuant)
> Sent: Thursday, March 1, 2018 2:54 AM
> To: 'users@dpdk.org'
> Subject: [dpdk-users] Multi-process recovery (is it even possible?)
> 
> Guys,
> 
> I am looking for possible solutions to the following problems that come
> along with an asymmetric multi-process architecture...
> 
> Given that multiple processes share the same RX/TX queue(s) and packet
> pool(s), and that one packet from the RX queue may be fanned out to multiple
> slave processes, is there a way to recover from a slave crashing (or exiting
> without cleaning up properly)? In theory it could have incremented an mbuf
> reference count more than once, and unless everything is restarted, I don't
> see a reliable way to release those mbufs back to the pool.

Recycling a single element is very difficult; from what I know, it's next to
impossible. Recycling a whole memzone/mempool is easier. So in your case, you
might want to use different pools for different queues (processes).
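
For illustration, a minimal sketch of the pool-per-process idea, assuming the
standard rte_pktmbuf_pool_create()/rte_mempool_lookup() APIs; the pool name
and sizes are hypothetical:

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Primary: create one dedicated pool per secondary process, so a crashed
 * secondary strands mbufs only in its own pool. */
static struct rte_mempool *
create_proc_pool(const char *name)	/* e.g. "mbuf_pool_proc0" */
{
	return rte_pktmbuf_pool_create(name, 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
}

/* Secondary: attach to its own pool by name. */
static struct rte_mempool *
attach_proc_pool(const char *name)
{
	return rte_mempool_lookup(name);
}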

If you really want to recycle an element, an rte_mbuf in your case, it might be
doable by:
1. setting up an rx callback for each process, and in the callback, storing a
special flag at rte_mbuf->udata64 (see the sketch after this list);
2. when the primary detects that a secondary is down, iterating over all
elements with the special flag and putting them back into the ring.
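
A minimal sketch of step 1, assuming DPDK 17.11's rte_eth_add_rx_callback()
and the mbuf udata64 field; the tag value and callback name are hypothetical:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MY_PROC_TAG 0x0BADCAFEULL	/* hypothetical per-process marker */

/* Rx callback: mark every received mbuf as owned by this process so the
 * primary can later find and reclaim them if this process dies. */
static uint16_t
tag_owner_cb(uint16_t port, uint16_t queue, struct rte_mbuf *pkts[],
	     uint16_t nb_pkts, uint16_t max_pkts, void *user_param)
{
	uint16_t i;

	(void)port; (void)queue; (void)max_pkts; (void)user_param;
	for (i = 0; i < nb_pkts; i++)
		pkts[i]->udata64 = MY_PROC_TAG;
	return nb_pkts;
}

/* Registered once per port/queue this process polls:
 * rte_eth_add_rx_callback(port_id, queue_id, tag_owner_cb, NULL); */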

There is a small chance this fails: an mbuf is allocated by a secondary process
that crashes before the mbuf is flagged.

Thanks,
Jianfeng


> 
> Also, if a spinlock is involved and either the master or a slave crashes,
> everything simply gets stuck. Is there any way to detect this (i.e. outside
> of the data path)?
> 
> Thanks,
> Vlad
> 


[dpdk-users] L3 Forwarding Exact Match Timeout

2018-03-01 Thread Ali Volkan Atli
Hi 

I'm trying to understand the l3fwd application for the exact match case. It
only adds keys into the hash but never deletes them, so I want to delete
entries according to a timeout by iterating through the hash table. But I have
many entries and I don't want to iterate over the whole table in one go; I
need to sweep it piece by piece in another thread while additions continue. So
does the following pseudo code work in the multithreaded case?

adding_thread
{
    while (1) {
        ...
        rte_hash_add_key_data(handle, key, data); // data has a last_seen timestamp
        ...
    }
}

sweeping_thread
{
    static uint32_t sweep_iter = 0;
    const void *next_key;
    void *next_data;

    for (int i = 0; i < SWEEP_CNT; ++i) {
        ...
        rte_hash_iterate(handle, &next_key, &next_data, &sweep_iter);
        sweep_iter = (sweep_iter + 1) & HASH_MASK;
        if (current_time - next_data->timeout > TIMEOUT)
            rte_hash_del_key(handle, next_key);
        ...
    }
}
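
For reference, the conventional full-table sweep shape checks the return value
of rte_hash_iterate() and lets it advance the iterator itself. A sketch only,
with a hypothetical entry layout:

#include <rte_hash.h>

struct flow_data {
	uint64_t last_seen;	/* hypothetical per-entry timestamp */
};

static void
sweep_all(struct rte_hash *handle, uint64_t now, uint64_t timeout)
{
	const void *key;
	void *data;
	uint32_t iter = 0;

	/* rte_hash_iterate() advances 'iter' and returns a negative value
	 * once every occupied entry has been visited. */
	while (rte_hash_iterate(handle, &key, &data, &iter) >= 0) {
		struct flow_data *fd = data;

		if (now - fd->last_seen > timeout)
			rte_hash_del_key(handle, key);
	}
}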


Thanks in advance.

- Volkan