Re: [vpp-dev] dpdk: switch to in-memory mode, deprecate use of socket-mem

2018-12-30 Thread Kingwel Xie
Hi Dave and Damjan, Yes, as I confirmed before, the patch works, but I just discovered a side effect introduced by this patch. That is, DPDK in-memory implies no shared config, so rte_eal_init/rte_eal_config_create will not check whether another DPDK process is running. Therefore, we could run
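
For context, a minimal C sketch of one way an application could guard itself against this side effect: since EAL in --in-memory mode no longer consults the shared runtime config, the application can take its own exclusive lock before calling rte_eal_init. The lock-file path and EAL arguments below are illustrative assumptions, not anything from the patch.

/* Sketch: application-level single-instance guard when EAL runs with
 * --in-memory (no shared config, so EAL itself will not detect a second
 * primary process). The lock-file path is an arbitrary example. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>

#include <rte_eal.h>

int
main (int argc, char **argv)
{
  /* Hypothetical lock file; any well-known per-host path would do. */
  int lock_fd = open ("/run/vpp/dpdk-primary.lock", O_CREAT | O_RDWR, 0644);
  if (lock_fd < 0 || flock (lock_fd, LOCK_EX | LOCK_NB) < 0)
    {
      fprintf (stderr, "another DPDK primary process appears to be running\n");
      return EXIT_FAILURE;
    }

  /* With --in-memory, rte_eal_init() skips the shared config file and
   * therefore the "is another primary running?" check described above. */
  char *eal_args[] = { argv[0], "--in-memory", "-c", "0x1" };
  if (rte_eal_init (4, eal_args) < 0)
    return EXIT_FAILURE;

  /* ... application runs; the flock is released automatically on exit ... */
  return EXIT_SUCCESS;
}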

Re: [vpp-dev] NAT handoff mechanism

2018-12-30 Thread Ole Troan
David, I believe you are right, it is possible to get into a deadlock between worker A and B in the case where A is waiting for B and B is waiting for A. We solved that by tail drop. Another way of doing that is to use a handoff worker separate from the NAT worker. We are exploring some new
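
For illustration only (this is not VPP's actual frame-queue code), a minimal sketch of the tail-drop idea: the producing worker drops when the consumer's ring is full instead of waiting, so neither worker ever blocks on the other and the circular wait cannot form. All names and sizes here are made up.

/* Bounded single-producer/single-consumer handoff ring with tail drop. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define HANDOFF_RING_SIZE 256 /* power of two, arbitrary example */

typedef struct
{
  uint32_t buffer_index[HANDOFF_RING_SIZE];
  _Atomic uint32_t head; /* advanced by the consuming worker */
  _Atomic uint32_t tail; /* advanced by the producing worker */
} handoff_ring_t;

/* Returns true if enqueued, false if the packet was tail-dropped. */
static bool
handoff_enqueue (handoff_ring_t *r, uint32_t bi)
{
  uint32_t tail = atomic_load_explicit (&r->tail, memory_order_relaxed);
  uint32_t head = atomic_load_explicit (&r->head, memory_order_acquire);

  if (tail - head >= HANDOFF_RING_SIZE)
    return false; /* congestion: drop instead of spinning/blocking */

  r->buffer_index[tail & (HANDOFF_RING_SIZE - 1)] = bi;
  atomic_store_explicit (&r->tail, tail + 1, memory_order_release);
  return true;
}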

Re: [vpp-dev] NAT handoff mechanism

2018-12-30 Thread david . leitch . vpp
Hi Damjan, Thanks for your answer. I think it is possible (I encountered this problem) to have a deadlock even when I use a separate CPU for handoff and NAT processing, or the same CPU with a congestion drop mechanism. I studied the code and separated the CPUs of handoff and NAT processing, in

Re: [vpp-dev] found some issue in pci vfio

2018-12-30 Thread Damjan Marion via Lists.Fd.Io
The fact that you can open a vfio container doesn't actually mean that you can use the vfio-pci module on a specific PCI device. The vfio-pci module can be used in 2 cases: - when /sys/bus/pci/devices//iommu_group exists - when /sys/module/vfio/parameters/enable_unsafe_noiommu_mode is set to Y (introduced
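
A small C sketch of those two sysfs checks, assuming a PCI address string such as "0000:04:00.0" (the address and the function name are illustrative, not from the thread):

#include <stdio.h>
#include <sys/stat.h>

static int
vfio_pci_usable (const char *pci_addr)
{
  char path[256];
  struct stat st;

  /* Case 1: the device is behind an IOMMU group. */
  snprintf (path, sizeof (path),
            "/sys/bus/pci/devices/%s/iommu_group", pci_addr);
  if (stat (path, &st) == 0)
    return 1;

  /* Case 2: unsafe no-IOMMU mode is explicitly enabled. */
  FILE *f =
    fopen ("/sys/module/vfio/parameters/enable_unsafe_noiommu_mode", "r");
  if (f)
    {
      char c = 0;
      int ok = (fscanf (f, " %c", &c) == 1 && c == 'Y');
      fclose (f);
      if (ok)
        return 1;
    }
  return 0;
}

int
main (void)
{
  /* Example PCI address; substitute the real device address. */
  printf ("vfio-pci usable: %d\n", vfio_pci_usable ("0000:04:00.0"));
  return 0;
}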

Re: [vpp-dev] NAT handoff mechanism

2018-12-30 Thread Damjan Marion via Lists.Fd.Io
> On 29 Dec 2018, at 07:26, david.leitch@gmail.com wrote: > Hi ... I know that we need a handoff mechanism when running multithreaded because traffic for a specific inside network user must always be processed on the same thread in both directions, and we cannot remove the handoff node from
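
For illustration (not the NAT plugin's actual code), a sketch of why handoff is needed: if session ownership is a deterministic function of the inside user's address, both directions resolve to the same owning worker, and a packet that the NIC delivers to any other worker has to be handed off to that owner.

#include <stdint.h>

/* Map an inside user's address to its owning worker. Knuth multiplicative
 * hash here is just an example; any deterministic function works as long as
 * both directions key on the inside address. */
static inline uint32_t
nat_owner_worker (uint32_t inside_addr, uint32_t n_workers)
{
  return (uint32_t) (inside_addr * 2654435761u) % n_workers;
}

/* A packet needs handoff when the worker that received it from the NIC is
 * not the worker that owns the session. */
static inline int
needs_handoff (uint32_t inside_addr, uint32_t this_worker, uint32_t n_workers)
{
  return nat_owner_worker (inside_addr, n_workers) != this_worker;
}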