Re: [vpp-dev] nat: specify a pool for an outgoing interface

2019-01-07 Thread Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES@Cisco) via Lists.Fd.Io
Address and port allocation function example https://gerrit.fd.io/r/#/c/14643/
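
For illustration only, here is a rough, hypothetical sketch of the idea behind such a callback; none of the names, types or signatures below come from the VPP NAT plugin, so please take the gerrit change above as the authoritative example. The sketch simply picks the outside address based on the egress interface, so that traffic leaving different interfaces is translated to different pool addresses:

/*
 * Hypothetical sketch only: the names, types and signature below are NOT the
 * VPP NAT plugin API; see the gerrit change above for the real callback.
 * The idea: a user-supplied callback picks the outside (address, port) pair
 * for each new session, here keyed on the egress interface so that every
 * egress interface gets its own pool address.
 */
#include <stdint.h>
#include <stddef.h>

typedef struct
{
  uint32_t sw_if_index;   /* egress interface index */
  uint32_t pool_addr;     /* outside IPv4 address to use (host byte order) */
} if_pool_t;

/* Hypothetical static mapping: egress interface -> pool address. */
static const if_pool_t if_pools[] = {
  { .sw_if_index = 1, .pool_addr = 0xc0a80a01 },  /* 192.168.10.1 */
  { .sw_if_index = 2, .pool_addr = 0xc0a81401 },  /* 192.168.20.1 */
};

/* Fill *addr/*port and return 0 on success, or -1 if no pool is configured
 * for this interface.  A real callback must also track which ports are
 * already in use per outside address. */
static int
my_alloc_addr_and_port (uint32_t tx_sw_if_index, uint16_t next_free_port,
                        uint32_t * addr, uint16_t * port)
{
  size_t i;
  for (i = 0; i < sizeof (if_pools) / sizeof (if_pools[0]); i++)
    if (if_pools[i].sw_if_index == tx_sw_if_index)
      {
        *addr = if_pools[i].pool_addr;
        *port = next_free_port;
        return 0;
      }
  return -1;
}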

Matus


From: khers 
Sent: Monday, January 7, 2019 4:13 PM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) 

Cc: vpp-dev 
Subject: Re: [vpp-dev] nat: specify a pool for an outgoing interface

Dear Matus, Ole

OK. Is there any example of how to write an address and port allocation function?
Do you have any plans to implement 'multiple outside interface' and 'support ACL
before NAT'?

Regards,
Khers

On Mon, Jan 7, 2019 at 8:53 AM Matus Fabian -X (matfabia - PANTHEON 
TECHNOLOGIES at Cisco) <matfa...@cisco.com> wrote:
Hi,

Your requirement is not currently supported. Maybe you can implement it by using
NAT as an output feature and writing your own address and port allocation function.

Matus


From: khers <s3m2e1.6s...@gmail.com>
Sent: Sunday, January 6, 2019 3:35 PM
To: Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) <matfa...@cisco.com>
Cc: vpp-dev <vpp-dev@lists.fd.io>
Subject: Re: [vpp-dev] nat: specify a pool for an outgoing interface

Dear Matus

Unfortunately we cannot assign two VRFs to an interface.
Is there another way to implement my requirement?

Regards,
Khers


On Wed, Jan 2, 2019 at 9:56 AM Matus Fabian -X (matfabia - PANTHEON 
TECHNOLOGIES at Cisco) <matfa...@cisco.com> wrote:
Hi,

You can translate packets to different addresses only when they come from different VRFs:
https://wiki.fd.io/view/VPP/NAT#NAT44_add_pool_address_for_specific_tenant

Matus


From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of emma sdi
Sent: Tuesday, January 1, 2019 9:10 AM
To: vpp-dev <vpp-dev@lists.fd.io>
Subject: [vpp-dev] nat: specify a pool for an outgoing interface

Dear VPP

I want to configure a simple nat topology with 3 interfaces:
  in:  GigabitEthernet3/0/0
  out: GigabitEthernetb/0/0
       GigabitEthernet1b/0/0
I want to source NAT all packets going out from GigabitEthernetb/0/0 to the IP
address of GigabitEthernetb/0/0, and source NAT all packets going out from
GigabitEthernet1b/0/0 to the IP address of GigabitEthernet1b/0/0.
Here are my configs
The problem is with NAT: all packets are translated to the same IP address!
Is there any way to translate packets to different IP addresses if they are
transmitted through different interfaces?
Here is the output of show trace

Regards,
khers





Re: [vpp-dev] ethernet-input on master branch

2019-01-07 Thread Kingwel Xie
Many thanks for sharing. Yes, as you pointed out, it might not be worthwhile
to do offload for VLAN.

Regards,
Kingwel

From: Damjan Marion 
Sent: Monday, January 07, 2019 9:18 PM
To: Kingwel Xie 
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] ethernet-input on master branch




On 7 Jan 2019, at 12:38, Kingwel Xie <kingwel@ericsson.com> wrote:

Thanks, Damjan, that is very clear. I checked the device-input and
ethernet-input code, and yes, I totally understand your point.

Two more questions: I can see you have spent some time on the avf plugin; do you think
it will eventually become mainstream and replace the dpdk drivers some day?

I don't think we want to be in the device driver business unless device vendors
pick it up, but on the other side it is good to have one native implementation,
and AVF is a good candidate due to compatibility with future Intel cards and its
quite simple communication channel with the PF driver.

DPDK is suboptimal for our use case, and with a native implementation it is easy
to show that...


For vlan tagging, would you like to consider using the HW rx and tx offload to 
optimize the vlan sub-interface?

The question here is: is it really cheaper to parse the dpdk rte_mbuf metadata to
extract the VLAN, or to simply parse the ethernet header, given that we need to
parse the ethernet header anyway?

Currently the code is optimised for untagged frames, but we simply store a u64 of
the data which follows the ethertype. It is just one load + store per packet, but
it allows us to optionally do VLAN processing if we detect a dot1q or dot1ad
frame.
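
To make that check concrete, here is a small stand-alone sketch; it is not the actual ethernet-input code (the type and function names are my own illustration), it only shows how a parser can branch on the ethertype that follows the two MAC addresses. 0x8100 (802.1Q) and 0x88A8 (802.1ad) are the standard tagged ethertypes:

#include <stdint.h>
#include <arpa/inet.h>            /* ntohs */

#define ETHERTYPE_DOT1Q   0x8100  /* 802.1Q single tag */
#define ETHERTYPE_DOT1AD  0x88A8  /* 802.1ad (QinQ) outer tag */

typedef struct __attribute__ ((packed))
{
  uint8_t  dst[6];
  uint8_t  src[6];
  uint16_t ethertype;             /* network byte order */
} eth_hdr_t;

/* Return 1 if the frame carries a VLAN tag and needs the tagged-frame path,
 * 0 if it can stay on the (cheaper) untagged path. */
static inline int
frame_is_tagged (const uint8_t * frame)
{
  const eth_hdr_t *e = (const eth_hdr_t *) frame;
  uint16_t type = ntohs (e->ethertype);
  return type == ETHERTYPE_DOT1Q || type == ETHERTYPE_DOT1AD;
}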

On the tx side it is also questionable, especially for the L3 path, as we need to
apply the rewrite string anyway, so the only difference is whether we memcpy 14,
18 or 22 bytes...

--
Damjan



Re: [vpp-dev] :: GRE tunnel dropping MPLS packets

2019-01-07 Thread Paul Vinciguerra
Hi Omer.

Would you be willing to help us out and provide a unit test that simulates this 
failure?


[vpp-dev] :: GRE tunnel dropping MPLS packets

2019-01-07 Thread Omer Majeed
Hi,

I'm running VPP on a CentOS 7 machine (say machine A), and running an
application on another CentOS 7 machine (say machine B).
I've made a GRE tunnel between those two machines.

vpp# show gre tunnel
[0] instance 0 src 192.168.17.10 dst 192.168.17.6 fib-idx 0 sw-if-idx 8
payload L3

I made that gre0 interface MPLS enabled.
I added outgoing MPLS routes in VPP for the IPs on machine B:

vpp# show ip fib table 2
192.168.100.4/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:47 buckets:1 uRPF:49 to:[0:0]]
[0] [@10]: mpls-label[2]:[25:64:0:eos]
[@1]: mpls via 0.0.0.0 gre0: mtu:9000
4500fe2f196ec0a8110ac0a811068847
  stacked-on:
[@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000
ac1f6b20498fdead00280800
192.168.100.5/32
  unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:46 buckets:1 uRPF:47 to:[0:0]]
[0] [@10]: mpls-label[0]:[30:64:0:eos]
[@1]: mpls via 0.0.0.0 gre0: mtu:9000
4500fe2f196ec0a8110ac0a811068847
  stacked-on:
[@3]: ipv4 via 192.168.17.6 loop9000: mtu:9000
ac1f6b20498fdead00280800

For reverse traffic I've added MPLS routes given below

vpp# show mpls fib table 0
18:eos/21 fib:0 index:29 locks:2
  src:API refs:1 entry-flags:uRPF-exempt,
src-flags:added,contributing,active,
path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
  path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
[@0]: dst-address,unicast lookup in ipv4-VRF:2

 forwarding:   mpls-eos-chain
  [@0]: dpo-load-balance: [proto:mpls index:32 buckets:1 uRPF:32 to:[0:0]]
[0] [@6]: mpls-disposition:[0]:[ip4, pipe]
[@7]: dst-address,unicast lookup in ipv4-VRF:2
19:eos/21 fib:0 index:38 locks:2
  src:API refs:1 entry-flags:uRPF-exempt,
src-flags:added,contributing,active,
path-list:[35] locks:20 flags:shared, uPRF-list:31 len:0 itfs:[]
  path:[35] pl-index:35 ip4 weight=1 pref=0 deag:  oper-flags:resolved,
[@0]: dst-address,unicast lookup in ipv4-VRF:2

 forwarding:   mpls-eos-chain
  [@0]: dpo-load-balance: [proto:mpls index:41 buckets:1 uRPF:41 to:[0:0]]
[0] [@6]: mpls-disposition:[9]:[ip4, pipe]
[@7]: dst-address,unicast lookup in ipv4-VRF:2

When I try to ping from machine B to an IP in VPP VRF 2 (machine A) through
that GRE tunnel, the packets arrive on gre0 but the GRE tunnel drops them.
vpp# show int gre0
              Name               Idx    State   Counter      Count
gre0                              8      up     rx packets      66
                                                rx bytes      6996
                                                drops           66
                                                (nil)           66

Is there anything else that needs to be done to get MPLS over GRE working?
Any suggestions on how to debug the issue?

Thanks a lot.
Best Regards,
Omer


[vpp-dev] An 'ip route' question

2019-01-07 Thread Jon Loeliger
Neale,

I have a question about the API field 'next_hop_table_id' within the
API call ip_add_del_route.  (There isn't a doc string for this field in
the API file, so guessing a bit.)  Is this field the same field for both
the CLI options 'next-hop-table' and 'ip4-lookup-in-table'?  Is there
a conceptual difference between these two CLI options, or are they
just different words for setting the same API field?  Is this merely
a mechanism for specifying the IPv[46] proto when an IP address
is not available to specify the proto?  (Also, note the CLI keyword
'lookup-in-vrf' as well.)

Thanks,
jdl


Re: [vpp-dev] Reminder: VPP Release 19.01 F0 date is this Wednesday 9th January 2019

2019-01-07 Thread Dave Barach via Lists.Fd.Io
+1. 

Please bear in mind that patches submitted after the F0 date must be low-risk, 
or they'll have to wait until master reopens after the 19.01 release throttle 
branch pull. 

-Original Message-
From: vpp-dev@lists.fd.io  On Behalf Of Andrew Yourtchenko
Sent: Monday, January 7, 2019 10:48 AM
To: vpp-dev 
Subject: [vpp-dev] Reminder: VPP Release 19.01 F0 date is this Wednesday 9th 
January 2019

Dear all,

Just a reminder that the API Freeze (F0) date is this Wednesday, the 9th of 
January, as per release plan at
https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_19.01

Please ensure that if you have the patches with the API changes, you merge them 
before this deadline.

Thanks a lot!

--a


[vpp-dev] Reminder: VPP Release 19.01 F0 date is this Wednesday 9th January 2019

2019-01-07 Thread Andrew Yourtchenko
Dear all,

Just a reminder that the API Freeze (F0) date is this Wednesday, the
9th of January, as per release plan at
https://wiki.fd.io/view/Projects/vpp/Release_Plans/Release_Plan_19.01

Please ensure that if you have the patches with the API changes, you
merge them before this deadline.

Thanks a lot!

--a


Re: [vpp-dev] nat: specify a pool for an outgoing interface

2019-01-07 Thread emma sdi
Dear Matus, Ole

OK. Is there any example of how to write an address and port allocation function?
Do you have any plans to implement 'multiple outside interface' and 'support
ACL before NAT'?

Regards,
Khers

On Mon, Jan 7, 2019 at 8:53 AM Matus Fabian -X (matfabia - PANTHEON
TECHNOLOGIES at Cisco)  wrote:

> Hi,
>
>
>
> Your requirement is not supported currently. Maybe you can implement it
> using NAT as output feature and write your own address and port allocation
> function.
>
>
>
> Matus
>
>
>
>
>
> *From:* khers 
> *Sent:* Sunday, January 6, 2019 3:35 PM
> *To:* Matus Fabian -X (matfabia - PANTHEON TECHNOLOGIES at Cisco) <
> matfa...@cisco.com>
> *Cc:* vpp-dev 
> *Subject:* Re: [vpp-dev] nat: specify a pool for an outgoing interface
>
>
>
> Dear Matus
>
>
>
> Unfortunately we cannot assign two VRFs to an interface.
>
> Is there another way to implement my requirement?
>
>
>
> Regards,
>
> Khers
>
>
>
>
>
> On Wed, Jan 2, 2019 at 9:56 AM Matus Fabian -X (matfabia - PANTHEON
> TECHNOLOGIES at Cisco)  wrote:
>
> Hi,
>
>
>
> You can translate packets to different addresses only when they come from different VRFs:
> https://wiki.fd.io/view/VPP/NAT#NAT44_add_pool_address_for_specific_tenant
>
>
>
> Matus
>
>
>
>
>
> *From:* vpp-dev@lists.fd.io  *On Behalf Of *emma sdi
> *Sent:* Tuesday, January 1, 2019 9:10 AM
> *To:* vpp-dev 
> *Subject:* [vpp-dev] nat: specify a pool for an outgoing interface
>
>
>
> Dear VPP
>
>
>
> I want to configure a simple nat topology with 3 interfaces:
>
>   in:  GigabitEthernet3/0/0
>   out: GigabitEthernetb/0/0
>        GigabitEthernet1b/0/0
>
> I want to source NAT all packets going out from GigabitEthernetb/0/0 to the IP
> address of GigabitEthernetb/0/0, and source NAT all packets going out from
> GigabitEthernet1b/0/0 to the IP address of GigabitEthernet1b/0/0.
>
> Here are my configs
>
> The problem is with NAT: all packets are translated to the same IP
> address!
>
> Is there any way to translate packets to different IP addresses if they
> are transmitted through different interfaces?
>
> Here is the output of show trace
>
>
>
> Regards,
>
> khers
>
>
>
>
>
>
>
>


[vpp-dev] wwan0 interface - vpp

2019-01-07 Thread amitmulayoff
Hello all, I'm new to VPP and have just started to research it.

I was wondering about an issue: I want VPP to recognize, in its port list, a
wwan0 interface (wifi) that I have on the Linux device.
It has its own PCI address; I can see it under sudo lshw -class network -businfo:
pci@:01:00.0  wlan0      network    QCA986x/988x 802.11ac Wireless Network A

but VPP doesn't "take it".

Can I make it happen?

Please advise.
Thanks a lot,
Am


Re: [vpp-dev] ethernet-input on master branch

2019-01-07 Thread Damjan Marion via Lists.Fd.Io


> On 7 Jan 2019, at 12:38, Kingwel Xie  wrote:
> 
> Thanks, Damjan, that is very clear. I checked the device-input and
> ethernet-input code, and yes, I totally understand your point.
>
> Two more questions: I can see you have spent some time on the avf plugin; do you think
> it will eventually become mainstream and replace the dpdk drivers some
> day?

I don't think we want to be in the device driver business unless device vendors
pick it up, but on the other side it is good to have one native implementation,
and AVF is a good candidate due to compatibility with future Intel cards and its
quite simple communication channel with the PF driver.

DPDK is suboptimal for our use case, and with a native implementation it is easy
to show that...

> 
> For vlan tagging, would you like to consider using the HW rx and tx offload 
> to optimize the vlan sub-interface?


The question here is: is it really cheaper to parse the dpdk rte_mbuf metadata to
extract the VLAN, or to simply parse the ethernet header, given that we need to
parse the ethernet header anyway?

Currently the code is optimised for untagged frames, but we simply store a u64 of
the data which follows the ethertype. It is just one load + store per packet, but
it allows us to optionally do VLAN processing if we detect a dot1q or dot1ad
frame.

On the tx side it is also questionable, especially for the L3 path, as we need to
apply the rewrite string anyway, so the only difference is whether we memcpy 14,
18 or 22 bytes...

-- 
Damjan



Re: [vpp-dev] ethernet-input on master branch

2019-01-07 Thread Kingwel Xie
Thanks, Damjan, that is very clear. I checked the device-input and
ethernet-input code, and yes, I totally understand your point.

Two more questions: I can see you have spent some time on the avf plugin; do you think
it will eventually become mainstream and replace the dpdk drivers some day?

For vlan tagging, would you like to consider using the HW rx and tx offload to 
optimize the vlan sub-interface?

Regards,
Kingwel


 Original message 
Subject: Re: [vpp-dev] ethernet-input on master branch
From: "Damjan Marion via Lists.Fd.Io" 
Date: January 7, 2019, 5:23 PM
Cc: Kingwel Xie 


On 5 Jan 2019, at 04:55, Kingwel Xie <kingwel@ericsson.com> wrote:

Hi Damjan,

I noticed you removed the quick path from dpdk-input to ip-input/mpls-input, 
after you merged the patch of ethernet-input optimization. Therefore, all 
packets now have to go through ethernet-input. It would take a few more cpu 
clocks than before.

Please elaborate on why you made this change.

Dear Kingwel,

The old bypass code, besides doing the ethertype lookup in the device driver
code, which is architecturally wrong, was broken for some corner cases
(i.e. it was not doing the dMAC check when the interface is in promisc mode or
when the interface doesn't do the dMAC check at all).
The bypass code was also not dealing properly with VLAN 0 packets.

Keeping things like that means that we will need to maintain separate ethertype
lookup code in each vpp interface type (i.e. memif, vhost, avf).

With that patch the ethertype lookup was moved to its one natural place, which is
the ethernet-input node, and as you noticed
there is a small cost of doing that (1-2 clocks in my setup).

So with this patch there is a small perf hit for L3 untagged traffic, but it also
brings a ~10 clock improvement for L2 traffic. It also
improves L3 performance for memif and vhost-user interfaces.

In addition there is another patch on top of this one which improves tagged
packet handling and reduces the cost of the VLAN single/double lookup from 70
clocks to less than 30.

Hope this explains,

--
Damjan



Re: [vpp-dev] ethernet-input on master branch

2019-01-07 Thread Damjan Marion via Lists.Fd.Io


> On 5 Jan 2019, at 04:55, Kingwel Xie  wrote:
> 
> Hi Damjan,
>  
> I noticed you removed the quick path from dpdk-input to ip-input/mpls-input, 
> after you merged the patch of ethernet-input optimization. Therefore, all 
> packets now have to go through ethernet-input. It would take a few more cpu 
> clocks than before.
>  
> Please elaborate on why you made this change.

Dear Kingwel,

The old bypass code, besides doing the ethertype lookup in the device driver
code, which is architecturally wrong, was broken for some corner cases
(i.e. it was not doing the dMAC check when the interface is in promisc mode or
when the interface doesn't do the dMAC check at all).
The bypass code was also not dealing properly with VLAN 0 packets.

Keeping things like that means that we will need to maintain separate ethertype
lookup code in each vpp interface type (i.e. memif, vhost, avf).

With that patch the ethertype lookup was moved to its one natural place, which is
the ethernet-input node, and as you noticed
there is a small cost of doing that (1-2 clocks in my setup).

So with this patch there is a small perf hit for L3 untagged traffic, but it also
brings a ~10 clock improvement for L2 traffic. It also
improves L3 performance for memif and vhost-user interfaces.

In addition there is another patch on top of this one which improves tagged
packet handling and reduces the cost of the VLAN single/double lookup from 70
clocks to less than 30.

Hope this explains,

-- 
Damjan
