[vpp-dev] NAT handoff mechanism

2018-12-28 Thread david . leitch . vpp
Hi ...
 
I know that we need the handoff mechanism when running multiple threads, because
traffic for a specific inside-network user must always be processed on the same
thread in both directions, and we cannot remove the handoff node from the NAT
nodes because handoff is faster than a locking mechanism.
 
So the problem is a potential deadlock when two workers wait for each other.
For example, with workers A and B: A is about to hand off to B, but at the
same time B has to hand off to A, so they both wait forever. Your solution in
VPP 19.01 (or 18.10) is to drop packets when the queue is full (congestion
drop).

My first question is: how do you detect congestion on the queue?
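
My guess at how the drop path works is something like the sketch below
(hypothetical code based on the vlib_buffer_enqueue_to_thread() helper in
vlib/buffer_node.h; the frame-queue setup and the per-packet thread selection
are omitted). Please correct me if this is wrong:

#include <vlib/vlib.h>
#include <vlib/buffer_node.h>

static uword
nat44_handoff_sketch (vlib_main_t * vm, vlib_node_runtime_t * node,
                      vlib_frame_t * frame)
{
  u32 n_pkts = frame->n_vectors;
  u32 *bi = vlib_frame_vector_args (frame);
  u16 thread_indices[VLIB_FRAME_SIZE];
  u32 fq_index = 0;   /* handoff frame-queue index, set up elsewhere */
  u32 n_enq;

  /* ... fill thread_indices[i] from each packet's 5-tuple ... */

  /* The last argument enables congestion drop: when a destination
     worker's frame queue is full, the helper drops the overflow
     instead of waiting, which breaks the A<->B wait-forever cycle. */
  n_enq = vlib_buffer_enqueue_to_thread (vm, fq_index, bi, thread_indices,
                                         n_pkts, 1 /* drop_on_congestion */);

  /* Congestion is detected simply by n_enq < n_pkts: the queue was
     full, so the remainder was dropped and counted. */
  if (n_enq < n_pkts)
    vlib_node_increment_counter (vm, node->node_index,
                                 0 /* congestion-drop error counter */,
                                 n_pkts - n_enq);
  return n_pkts;
}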

And what happens if we have separate CPU cores for the handoff node and the
NAT processing node, instead of doing both handoff and NATting on a single
core?
For example, CPU core A runs ( dpdk-input -> ... -> NAT handoff ) and CPU
core B runs ( nat44-in2out -> ... -> ip4-lookup -> interface-output ). In
this situation worker A waits for worker B, but worker B never waits for
worker A.
Is it true to say that we can never have a potential deadlock if we use
separate CPUs?
If yes, why do you use the same single CPU core for both NAT and handoff?
Wouldn't separate CPUs solve the deadlock?


Re: [vpp-dev] Simple Rate Limit and QoS #vpp

2018-12-28 Thread carlito nueno
I am looking for rate limiting (bandwidth/traffic shaping) as well.


Vakili, did you figure it out?

Thanks.
On Sat, Sep 8, 2018 at 12:16 AM  wrote:

> Simple Rate Limit and QoS
> Hi dears. Three questions, please:
> 1: How can I configure an interface to pass only a limited rate (bandwidth
> management) in VPP?
> 2: Can I give a range of IPs to assign rate limits to?
> 3: Is there any way to apply configuration changes without restarting vpp
> or reloading the interface configuration? I need to save and run without a
> restart.
>
> Thanks a lot
> Vakili


Re: [vpp-dev] found some issue in pci vfio

2018-12-28 Thread Yu, Ping
I submitted a patch to fix it.

https://gerrit.fd.io/r/16640

Ping

From: Yu, Ping
Sent: Saturday, December 29, 2018 10:43 AM
To: vpp-dev@lists.fd.io
Cc: Yu, Ping 
Subject: found some issue in pci vfio

Hello, all

Recently I ran into an issue with vfio; some root-cause analysis is below:

Before running vpp, I use "dpdk-devbind.py" to bind the NIC to vfio-pci and
then use the "uio-driver auto" configuration. This once worked well, but it
started failing recently, so I took a look to resolve the problem.
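
To be concrete, the setup is roughly this (the PCI address is just an
example):

./usertools/dpdk-devbind.py --bind=vfio-pci 0000:04:00.0

and in startup.conf:

dpdk {
  dev 0000:04:00.0
  uio-driver auto
}
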
I found that vpp has a problem "auto"-detecting that the uio driver should be
vfio; the bug info is below:

1)  vlib_pci_bind_to_uio depends on vlib_pci_get_device_info to report the
iommu_group

2)  vlib_pci_get_device_info will check whether lvm->container_fd == -1

In my case, lvm->container_fd is initialized to -1 and nothing ever modifies
it, so in the eyes of vlib_pci_bind_to_uio the iommu_group is -1, and it then
tries to enable noiommu mode. If the kernel has NOIOMMU enabled, vfio can
continue to work in noiommu mode, but if not, the device is rejected due to
"no VFIO support" (pci.c, line 411).
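
In other words, the "auto" decision path behaves roughly like this (a
simplified paraphrase of my reading, not the literal vlib/linux/pci.c source;
types reduced to plain C):

#include <string.h>
#include <stddef.h>

static const char *
pick_uio_driver_sketch (const char *requested, int iommu_group,
                        int kernel_has_noiommu)
{
  if (strcmp (requested, "auto") != 0)
    return requested;            /* explicit driver, e.g. "vfio-pci" */

  /* iommu_group comes from vlib_pci_get_device_info (), which only
     fills it in when lvm->container_fd != -1; since the fd is
     initialized to -1 and never changed, we always see -1 here. */
  if (iommu_group != -1)
    return "vfio-pci";           /* normal iommu case */
  if (kernel_has_noiommu)
    return "vfio-pci";           /* falls back to noiommu mode */
  return NULL;                   /* rejected: "no VFIO support"
                                    (pci.c, line 411) */
}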

The workaround is to drop "auto" mode and explicitly configure "vfio-pci" in
the configuration, but I hope the default "auto" mode can be made smarter.

Ping



Re: [vpp-dev] Question regarding captive portal

2018-12-28 Thread carlito nueno
NAT might be the right way to achieve this.

This is the command I used with iptables:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.2

What is the equivalent command with VPP NAT, when I am trying to send port 80
traffic from the main interface to a tap device:
main interface: GigabitEthernet4/0/0
tap id: 3 (tap3) with address 192.168.1.2 and host-if-name tapcap
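
Would something along these lines be the right direction? This is just my
guess from the nat44 CLI help, so the exact syntax (and whether "external"
accepts an interface plus port here) may differ by release:

vpp# set interface nat44 in tap3 out GigabitEthernet4/0/0
vpp# nat44 add static mapping tcp local 192.168.1.2 80 external GigabitEthernet4/0/0 80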

Thanks


Re: [vpp-dev] VPP Failing (all?) Verify Jobs

2018-12-28 Thread Jon Loeliger
On Fri, Dec 28, 2018 at 9:25 AM Paul Vinciguerra 
wrote:

> Hi Jon,
>
> I opened a ticket (#66166) with the helpdesk on 12/25.
>

Ah, excellent.  Thanks!


> "Starting Dec. 14, 2018 through Jan. 1, 2019 the Linux Foundation will be
> operating with reduced staff in observance of our office closure.
> Responses to inbound support requests will be delayed, or otherwise
> addressed after Jan. 2, 2019."
>
> Please be patient.
>

Of course!

I just wasn't sure if "just rebase it" meant the root cause would hit the
floor or not.
With the ticket you opened, it's all good!

Thanks,
jdl


Re: [vpp-dev] VPP Failing (all?) Verify Jobs

2018-12-28 Thread Paul Vinciguerra
Hi Jon,

I opened a ticket (#66166) with the helpdesk on 12/25.

"Starting Dec. 14, 2018 through Jan. 1, 2019 the Linux Foundation will be
operating with reduced staff in observance of our office closure.
Responses to inbound support requests will be delayed, or otherwise
addressed after Jan. 2, 2019."

Please be patient.



On Fri, Dec 28, 2018 at 9:10 AM Jon Loeliger  wrote:

> On Thu, Dec 27, 2018 at 1:11 PM Florin Coras 
> wrote:
>
>> Paul came up with a better fix [1]. Rebasing of the patches should solve
>> the verify problems now.
>>
>> Florin
>>
>
> Hi Florin,
>
> While that worked enough to verify my patch, it seems like a temporary
> solution.
> Is there a proper longer term fix here?
>
> Thanks,
> jdl
>
>


[vpp-dev] Question about duplicate nodes in the graph

2018-12-28 Thread Jon Loeliger
Folks,

Is there a valid use for duplicate nodes in the node graph?
Specifically, if I type this command three times:

vpp# set interface ip vxlan-bypass TenGigabitEthernet6/0/0
vpp# set interface ip vxlan-bypass TenGigabitEthernet6/0/0
vpp# set interface ip vxlan-bypass TenGigabitEthernet6/0/0

I end up with duplicate ip4-vxlan-bypass entries:

vpp# show interface features TenGigabitEthernet6/0/0
[ ...]
ip4-unicast:
  ip4-not-enabled
  ip4-vxlan-bypass
  ip4-vxlan-bypass
  ip4-vxlan-bypass

In this case, I don't think it makes sense, and I think we should detect
the presence of the node and disallow multiple additions.

If that is true for this case, but NOT true for other cases, we might
entertain a flag (per interface, per family) in the function
vnet_int_vxlan_bypass_mode() to record its addition and prevent subsequent
re-additions, along the lines of the sketch below.
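
Something like this is what I have in mind (the bitmap bookkeeping is
hypothetical; only vnet_feature_enable_disable() and the arc/feature names
are real):

#include <vppinfra/bitmap.h>
#include <vnet/feature/feature.h>

/* One enabled/disabled bitmap per address family, indexed by
   sw_if_index; purely illustrative bookkeeping. */
static uword *vxlan_bypass_enabled[2];

void
vnet_int_vxlan_bypass_mode_sketch (u32 sw_if_index, u8 is_ip6, u8 is_enable)
{
  uword current =
    clib_bitmap_get (vxlan_bypass_enabled[is_ip6], sw_if_index);

  if (current == is_enable)
    return;   /* already in the requested state: no duplicate addition */

  vnet_feature_enable_disable (is_ip6 ? "ip6-unicast" : "ip4-unicast",
                               is_ip6 ? "ip6-vxlan-bypass"
                                      : "ip4-vxlan-bypass",
                               sw_if_index, is_enable, 0, 0);
  vxlan_bypass_enabled[is_ip6] =
    clib_bitmap_set (vxlan_bypass_enabled[is_ip6], sw_if_index, is_enable);
}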

On the other hand, if duplicate nodes are a Bad Thing for the whole graph,
perhaps the function vnet_feature_enable_disable() itself should be able to
detect and prevent duplicates?

Thoughts?
jdl


Re: [vpp-dev] VPP Failing (all?) Verify Jobs

2018-12-28 Thread Jon Loeliger
On Thu, Dec 27, 2018 at 1:11 PM Florin Coras  wrote:

> Paul came up with a better fix [1]. Rebasing of the patches should solve
> the verify problems now.
>
> Florin
>

Hi Florin,

While that worked enough to verify my patch, it seems like a temporary
solution.
Is there a proper longer term fix here?

Thanks,
jdl