Re: [vpp-dev] Using custom openssl with vpp #vpp

2018-05-14 Thread Kingwel Xie
Hi,

We managed to link with openssl 1.1 successfully. The OS is Ubuntu 16.04.
Basically we downloaded v1.1 and built it, then made some changes to the vpp
makefile to point at the correct header and lib files.

V1.0.2 is still there for the other apps, but vpp is working with v1.1.
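
In case it helps others, the steps were roughly as follows (a sketch from
memory; the prefix and configure options are examples, not the exact ones we
used):

  # build openssl 1.1 into a private prefix, leaving the system v1.0.2 alone
  cd openssl-1.1.0h
  ./config --prefix=/opt/openssl-1.1 shared
  make && sudo make install

Then point the vpp build at that prefix, e.g. for the tls plugin:

  CPPFLAGS: -I/opt/openssl-1.1/include
  LDFLAGS:  -L/opt/openssl-1.1/lib -Wl,-rpath,/opt/openssl-1.1/lib -lssl -lcrypto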

Regards,
Kingwel


From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Florin Coras
Sent: Monday, May 14, 2018 11:18 PM
To: duct...@viettel.com.vn
Cc: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] Using custom openssl with vpp #vpp

Hi DucTM,

Did you try changing src/plugin/tlsopenssl.am to link against openssl 1.1? I’ve 
never tried it, so no idea what the end result may be :-)

Florin


On May 14, 2018, at 3:52 AM, duct...@viettel.com.vn wrote:

Hi,
I'm trying to customize the openssl plugin so that it works with openssl 1.1
(with some modifications as well).
Applying the new openssl version to the system is not possible, since some
other apps rely on openssl and do not work with openssl 1.1.
Is there any configuration I can make to use vpp with a separately built
openssl? Or just some ideas about how to achieve that.
Any help will be highly appreciated.

DucTM




Re: [vpp-dev] Query on VPP behaviour when IP from same subnet configured on plain and vlan interface

2018-05-14 Thread Ole Troan
>  Thanks for the response. Any plans to change this behaviour in the future
> to support multiple interfaces in the same subnet?

How do you intend for that to work?
(While IPv6 notionally has support for that, as far as I know no
implementations support it.)

Best regards,
Ole








Re: [vpp-dev] Query on VPP behaviour when IP from same subnet configured on plain and vlan interface

2018-05-14 Thread bindiya Kurle
Hi  Neale,

 Thanks for the response. Any plans to change this behaviour in the future to
support multiple interfaces in the same subnet?

Regards,
Bindiya

On Mon, May 14, 2018 at 5:15 PM, Neale Ranns (nranns) wrote:

>
>
> VPP does not support multiple interfaces in the same subnet.
>
> Your scenario will be a configuration error once:
>
>   https://gerrit.fd.io/r/#/c/8057/
>
> is committed.
>
>
>
> /neale
>
>
>
> *From: * on behalf of bindiya Kurle <bindiyaku...@gmail.com>
> *Date: *Monday, 7 May 2018 at 07:27
> *To: *"vpp-dev@lists.fd.io"
> *Subject: *[vpp-dev] Query on VPP behaviour when IP from same subnet
> configured on plain and vlan interface
>
>
>
> *Hi,*
>
>
>
> 13.0.0.200 ---+--- GigabitEthernet1/0/0     (plain interface) 13.0.0.2
>               |
>               +--- GigabitEthernet1/0/0.111 (vlan interface)  13.0.0.5
>
> packet to send out: destination IP (13.0.0.200)
>
> Fig 1.
>
>
>
>
>
> I am trying to configure two IPs belonging to the same subnet on a plain and
> a VLAN interface (refer to fig 1). While sending a packet, the ip4-lookup
> node fetches the dpoi_index pertaining to the VLAN interface, which in turn
> gives the software index of the VLAN interface in the lookup.
>
> If I try the same scenario on Linux, a ping to the same destination IP
> (13.0.0.200) works, as the kernel picks up the plain-interface route since
> that is the first route in its routing table.
>
>
>
> *FIB table entry: *
>
> 13.0.0.2/32  pmtu: 0
>
>   unicast-ip4-chain
>
>   [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:16 to:[0:0]]
>
> [0] [@2]: dpo-receive: 13.0.0.2 on GigabitEthernet1/0/0
>
> 13.0.0.5/32  pmtu: 0
>
>   unicast-ip4-chain
>
>   [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:22 to:[0:0]]
>
> [0] [@2]: dpo-receive: 13.0.0.5 on GigabitEthernet1/0/0.111
>
> 13.0.0.200/32  pmtu: 0
>
>   UNRESOLVED
>
>
>
> Questions:
>
> 1. Is there any specific reason why VPP always returns the last entry added
> for that prefix instead of the first? Can VPP behaviour be made similar to
> the Linux kernel behaviour?
>
>
>
>
>
> Regards,
>
> Bindiya
>
>
>
> 
>
>


Re: [vpp-dev] Using custom openssl with vpp #vpp

2018-05-14 Thread Florin Coras
Hi DucTM, 

Did you try changing src/plugin/tlsopenssl.am to link against openssl 1.1? I’ve 
never tried it, so no idea what the end result may be :-)
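
For anyone who wants to try: a hypothetical sketch of what such a change
might look like (the actual variable names in tlsopenssl.am may differ, and
/opt/openssl-1.1 is an example prefix, not a required location):

  tlsopenssl_plugin_la_CPPFLAGS = -I/opt/openssl-1.1/include
  tlsopenssl_plugin_la_LDFLAGS = -L/opt/openssl-1.1/lib -Wl,-rpath,/opt/openssl-1.1/lib
  tlsopenssl_plugin_la_LIBADD = -lssl -lcrypto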

Florin

> On May 14, 2018, at 3:52 AM, duct...@viettel.com.vn wrote:
> 
> Hi,
> I'm trying to customize the openssl plugin so that it works with openssl
> 1.1 (with some modifications as well).
> Applying the new openssl version to the system is not possible, since some
> other apps rely on openssl and do not work with openssl 1.1.
> Is there any configuration I can make to use vpp with a separately built
> openssl? Or just some ideas about how to achieve that.
> Any help will be highly appreciated.
> 
> DucTM
> 



Re: [vpp-dev] Packet tx functions via DPDK

2018-05-14 Thread Prashant Upadhyaya
Thanks a bunch Nitin, your mail helps me connect the dots -- the thing
I was missing was the connection with ethernet_register_interface().
Nice code browsing on your part!
Please do check my other mail on the list too (about frames); it
would be great if we could drill down on that topic as well.

Regards
-Prashant


On Fri, May 11, 2018 at 7:08 PM, Nitin Saxena  wrote:
> Hi Prashant,
>
> Hope you are doing fine.
>
> Regarding your question, I am not able to see the macswap plugin in the
> current master branch, but I will try to explain with respect to the dpdk
> plugin:
>
> With respect to the low level device, each VPP device driver registers:
>
> 1) An input node (for Rx) via VLIB_REGISTER_NODE (this you already figured out)
> 2) A Tx function via VNET_DEVICE_CLASS (), for a device class like "dpdk"
>
> There are a couple more function pointers registered, but let's stick to the
> Rx/Tx part.
>
> As part of startup, the low level plugin/driver calls
> ethernet_register_interface(), which in turn calls vnet_register_interface().
>
> vnet_register_interface:
> For a particular interface, like an Intel 40G port, an interface node is
> created at init time, and the tx function of that node is copied from
> VNET_DEVICE_CLASS (.tx_function = ...). The node->tx and node->output
> functions are properly initialized and the node is registered.
>
> The VPP stack sends packets to this low level Tx node via sw_if_index. I am
> guessing sw_if_index is determined by IPv4 routing or L2 switching.
>
> I think vnet_set_interface_output_node() is called for those interfaces (Tx
> path) whose device class does not provide a tx_function, but I am not sure.
>
> "show vlib graph" will tell you how nodes are arranged in vpp graph.
>
> To be specific for your question
>
>   next0 = hi0->output_node_next_index;
>
> output_node_next_index is the index of the next node to which the current
> vector is handed (the transition from one node to another along the graph).
>
> Note: all of this I got by browsing the code; if this information is not
> correct, I request the VPP experts to correct it.
>
> Thanks,
> Nitin
>
>
> On Thursday 10 May 2018 02:19 PM, Prashant Upadhyaya wrote:
>>
>> Hi,
>>
>> I am trying to walk through the code to see how a packet arrives
>> into the system at the dpdk rx side and finally leaves it at the dpdk tx
>> side. I am using the context of the macswap sample plugin for this.
>>
>> It is clear to me that dpdk-input is a graph node and it is an 'input'
>> type graph node so it polls for the packets using dpdk functions. The
>> frame is then eventually passed to the sample plugin because the
>> sample plugin inserts itself at the right place. The sample plugin
>> queues the packets to the interface-output graph node.
>>
>> So now I check the interface-output graph node function.
>> I locate that in vpp/src/vnet/interface_output.c
>> So the dispatch function for the graph node is
>> vnet_per_buffer_interface_output
>> Here the interface-output node is queueing the packets to a next node
>> based on the following code --
>>
>>   hi0 =
>>  vnet_get_sup_hw_interface (vnm,
>> vnet_buffer (b0)->sw_if_index
>> [VLIB_TX]);
>>
>>next0 = hi0->output_node_next_index;
>>
>> Now I am a little lost: what is this output_node_next_index? Which
>> graph node function is actually called to emit the packet?
>> Where exactly is this set up?
>>
>> I do see that the actual dpdk tx burst function is called from
>> tx_burst_vector_internal, which itself is called from
>> dpdk_interface_tx (vpp/src/plugins/dpdk/device/device.c). But how the
>> code reaches the dpdk_interface_tx after the packets are queued from
>> interface-output graph node is not clear to me. If somebody could help
>> me connect the dots, that would be great.
>>
>> Regards
>> -Prashant
>>
>> 
>>
>
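
To restate Nitin's explanation as a minimal sketch (the names below are
illustrative, not from a real driver; see src/plugins/dpdk/device/ for the
real thing):

  /* Tx: the device class supplies the tx function; when the driver calls
     ethernet_register_interface() -> vnet_register_interface(), this
     becomes the tx function of the per-interface output node. */
  VNET_DEVICE_CLASS (my_device_class) = {
    .name = "mydev",
    .tx_function = my_interface_tx,
  };

  /* Rx: a polling input node feeds vectors into the graph */
  VLIB_REGISTER_NODE (my_input_node) = {
    .function = my_input_fn,
    .name = "my-input",
    .type = VLIB_NODE_TYPE_INPUT,
    .state = VLIB_NODE_STATE_POLLING,
  };

  /* Init: wire the device class to a hw/sw interface pair. After this,
     interface-output reaches the interface node created here via
     hi->output_node_next_index. */
  ethernet_register_interface (vnm, my_device_class.index, instance,
                               mac_addr, &hw_if_index,
                               /* flag_change */ 0);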




Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-14 Thread Luca Muscariello (lumuscar)
Hi Florin,

Session enable does not help.
hping is using raw sockets so this must be the reason.

Luca



From: Florin Coras 
Date: Friday 11 May 2018 at 23:02
To: Luca Muscariello 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Hi Luca,

Not really sure why the kernel is slow to reply to ping. Maybe it has to do 
with scheduling but it’s just guess work.

I’ve never tried hping. Let me see if I understand your scenario: while running 
iperf you tried to hping the stack and you got no rst back? Anything 
interesting in “sh error” counters? If iperf wasn’t running, did you first 
enable the stack with “session enable”?

Florin


On May 11, 2018, at 3:19 AM, Luca Muscariello wrote:

Florin,

A few more comments about latency.
Some number in ms in the table below:

This is ping and iperf3 concurrent. In case of VPP it is vppctl ping.

Kernel w/ load   Kernel w/o load  VPP w/ load  VPP w/o load
Min.   :0.1920   Min.   :0.0610   Min.   :0.0573   Min.   :0.03480
1st Qu.:0.2330   1st Qu.:0.1050   1st Qu.:0.2058   1st Qu.:0.04640
Median :0.2450   Median :0.1090   Median :0.2289   Median :0.04880
Mean   :0.2458   Mean   :0.1153   Mean   :0.2568   Mean   :0.05096
3rd Qu.:0.2720   3rd Qu.:0.1290   3rd Qu.:0.2601   3rd Qu.:0.05270
Max.   :0.2800   Max.   :0.1740   Max.   :0.6926   Max.   :0.09420

In short: ICMP packets have a lower latency under load.
I could interpret this as maybe due to vectorization. Also, the Linux kernel
is slower to reply to ping by a 2x factor (system call latency?): 115us vs
50us in VPP; with load there is no difference. In this test Linux TCP is using TSO.

While trying to use hping to get a latency sample with TCP instead of ICMP,
we noticed that the VPP TCP stack does not reply with a RST, so we don't get
any sample. Is that expected behavior?

Thanks


Luca





From: Luca Muscariello
Date: Thursday 10 May 2018 at 13:52
To: Florin Coras
Cc: Luca Muscariello, "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

MTU had no effect, just statistical fluctuations in the test reports. Sorry for 
misreporting the info.

We are exploiting vectorization as we have a single memif channel
per transport socket so we can control the size of the batches dynamically.

In theory the size of outstanding data from the transport should be controlled 
in bytes for
batching to be useful and not harmful as frame sizes can vary a lot. But I’m 
not aware of a queue abstraction from DPDK
to control that from VPP.

From: Florin Coras
Date: Wednesday 9 May 2018 at 18:23
To: Luca Muscariello
Cc: Luca Muscariello, "vpp-dev@lists.fd.io"
Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

Hi Luca,

We don’t yet support pmtu in the stack so tcp uses a fixed 1460 mtu, unless you 
changed that, we shouldn’t generate jumbo packets. If we do, I’ll have to take 
a look at it :)

If you already had your transport protocol, using memif is the natural way to 
go. Using the session layer makes sense only if you can implement your 
transport within vpp in a way that leverages vectorization or if it can 
leverage the existing transports (see for instance the TLS implementation).

Until today [1] the stack did allow for excessive batching (generation of 
multiple frames in one dispatch loop) but we’re now restricting that to one. 
This is still far from proper pacing which is on our todo list.

Florin

[1] https://gerrit.fd.io/r/#/c/12439/





On May 9, 2018, at 4:21 AM, Luca Muscariello (lumuscar) wrote:

Florin,

Thanks for the slide deck, I’ll check it soon.

BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
little
advantage wrt the Linux TCP stack which was using 1500B by default.

By manually setting DPDK MTU to 1500B the goodput goes down to 8.5Gbps which 
compares
to 4.5Gbps for Linux w/o TSO. Also congestion window adaptation is not the same.

BTW, for what we’re doing it is difficult to reuse the VPP session layer as it 
is.
Our transport stack uses a different kind of namespace and mux/demux is also 
different.

We are using memif as underlying driver which does not seem to be a
bottleneck as we can also control batching there. Also, we have our own
shared memory downstream memif inside VPP through a plugin.

What we observed is that 

Re: [vpp-dev] ip4-not-enabled in IP-in-IP tunnel

2018-05-14 Thread Nitin Saxena

Thanks Ole, makes sense. Let me try assigning an IP to ipip0.

Thanks,
Nitin

On Monday 14 May 2018 06:24 PM, Ole Troan wrote:

Nitin,

A tunnel interface is just like any other interface and you need to have an IP 
address configured on it to make it IP enabled.
(Or point to another interface with IP unnumbered).

Note that the IPIP interface supports {IPvX over IPvY} where X and Y are 4 and 
6. So your patch would blindly enable IPv4, which isn't quite what you want.
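
Something along these lines, for example (CLI syntax from memory -- check the
CLI help for the exact form, and substitute your own addresses):

  vpp# set interface ip address ipip0 10.10.10.1/32
  vpp# set interface state ipip0 up

or, borrowing the address of another interface instead:

  vpp# set interface unnumbered ipip0 use VirtualFunctionEthernet1/0/1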

Cheers,
Ole


On 14 May 2018, at 14:27, Nitin Saxena  wrote:

Hi,

Using VPP v1804 I created an IP-in-IP tunnel and ran into the ip4-not-enabled
issue. Following is the trace:

===
-- Start of thread 1 vpp_wk_0 ---
Packet 1

00:04:16:407330: dpdk-input
  VirtualFunctionEthernet1/0/1 rx queue 0
  buffer 0x291b: current data 14, length 48, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
 ext-hdr-valid
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14
  PKT MBUF: port 0, nb_segs 1, pkt_len 62
buf_len 2176, data_len 62, ol_flags 0x0, data_off 128, phys_addr 0x80148e80
packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
  IP4: 90:e2:ba:91:22:04 -> 00:0f:b7:11:8d:da
  IP_IN_IP: 50.0.0.1 -> 60.0.0.1
tos 0x00, ttl 64, length 48, checksum 0x0cc8
fragment id 0x0001
00:04:16:407343: ip4-input
  IP_IN_IP: 50.0.0.1 -> 60.0.0.1
tos 0x00, ttl 64, length 48, checksum 0x0cc8
fragment id 0x0001
00:04:16:407350: ip4-lookup
  fib 0 dpo-idx 7 flow hash: 0x
  IP_IN_IP: 50.0.0.1 -> 60.0.0.1
tos 0x00, ttl 64, length 48, checksum 0x0cc8
fragment id 0x0001
00:04:16:407354: ip4-local
IP_IN_IP: 50.0.0.1 -> 60.0.0.1
  tos 0x00, ttl 64, length 48, checksum 0x0cc8
  fragment id 0x0001
00:04:16:407356: ipip4-input
  IPIP: tunnel 3 len 48 src 50.0.0.1 dst 60.0.0.1
00:04:16:407358: ip4-input
  UDP: 20.0.0.3 -> 30.0.0.2
tos 0x00, ttl 64, length 28, checksum 0x48cc
fragment id 0x0001
  UDP: 53 -> 53
length 8, checksum 0xcd6f
00:04:16:407359: ip4-not-enabled
UDP: 20.0.0.3 -> 30.0.0.2
  tos 0x00, ttl 64, length 28, checksum 0x48cc
  fragment id 0x0001
UDP: 53 -> 53
  length 8, checksum 0xcd6f
00:04:16:407365: error-drop
===

However, we were able to fix the above issue with the following patch:

diff --git a/src/vnet/ipip/ipip.c b/src/vnet/ipip/ipip.c
index 82c961c..d3bf9d9 100644
--- a/src/vnet/ipip/ipip.c
+++ b/src/vnet/ipip/ipip.c
@@ -468,6 +468,8 @@ ipip_add_tunnel (ipip_transport_t transport,
   t->fib_index = fib_index;
   t->sw_if_index = sw_if_index;

+  ip4_sw_interface_enable_disable (sw_if_index, 1);
+
   t->transport = transport;
   vec_validate_init_empty (gm->tunnel_index_by_sw_if_index, sw_if_index, ~0);
   gm->tunnel_index_by_sw_if_index[sw_if_index] = t_idx;
@@ -529,6 +531,7 @@ ipip_del_tunnel (u32 sw_if_index)
   if (t == NULL)
 return VNET_API_ERROR_NO_SUCH_ENTRY;

+  ip4_sw_interface_enable_disable (sw_if_index, 0);
   vnet_sw_interface_set_flags (vnm, sw_if_index, 0 /* down */ );
   gm->tunnel_index_by_sw_if_index[sw_if_index] = ~0;
   vnet_delete_hw_interface (vnm, t->hw_if_index);

Are we missing anything? Any comments will be appreciated.

Thanks,
Nitin













[vpp-dev] ip4-not-enabled in IP-in-IP tunnel

2018-05-14 Thread Nitin Saxena

Hi,

Using VPP v1804 I created an IP-in-IP tunnel and ran into the ip4-not-enabled
issue. Following is the trace:


===
-- Start of thread 1 vpp_wk_0 ---
Packet 1

00:04:16:407330: dpdk-input
  VirtualFunctionEthernet1/0/1 rx queue 0
  buffer 0x291b: current data 14, length 48, free-list 0, clone-count 
0, totlen-nifb 0, trace 0x0

 ext-hdr-valid
 l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14

  PKT MBUF: port 0, nb_segs 1, pkt_len 62
buf_len 2176, data_len 62, ol_flags 0x0, data_off 128, phys_addr 
0x80148e80

packet_type 0x0 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
  IP4: 90:e2:ba:91:22:04 -> 00:0f:b7:11:8d:da
  IP_IN_IP: 50.0.0.1 -> 60.0.0.1
tos 0x00, ttl 64, length 48, checksum 0x0cc8
fragment id 0x0001
00:04:16:407343: ip4-input
  IP_IN_IP: 50.0.0.1 -> 60.0.0.1
tos 0x00, ttl 64, length 48, checksum 0x0cc8
fragment id 0x0001
00:04:16:407350: ip4-lookup
  fib 0 dpo-idx 7 flow hash: 0x
  IP_IN_IP: 50.0.0.1 -> 60.0.0.1
tos 0x00, ttl 64, length 48, checksum 0x0cc8
fragment id 0x0001
00:04:16:407354: ip4-local
IP_IN_IP: 50.0.0.1 -> 60.0.0.1
  tos 0x00, ttl 64, length 48, checksum 0x0cc8
  fragment id 0x0001
00:04:16:407356: ipip4-input
  IPIP: tunnel 3 len 48 src 50.0.0.1 dst 60.0.0.1
00:04:16:407358: ip4-input
  UDP: 20.0.0.3 -> 30.0.0.2
tos 0x00, ttl 64, length 28, checksum 0x48cc
fragment id 0x0001
  UDP: 53 -> 53
length 8, checksum 0xcd6f
00:04:16:407359: ip4-not-enabled
UDP: 20.0.0.3 -> 30.0.0.2
  tos 0x00, ttl 64, length 28, checksum 0x48cc
  fragment id 0x0001
UDP: 53 -> 53
  length 8, checksum 0xcd6f
00:04:16:407365: error-drop
===

However, we were able to fix the above issue with the following patch:

diff --git a/src/vnet/ipip/ipip.c b/src/vnet/ipip/ipip.c
index 82c961c..d3bf9d9 100644
--- a/src/vnet/ipip/ipip.c
+++ b/src/vnet/ipip/ipip.c
@@ -468,6 +468,8 @@ ipip_add_tunnel (ipip_transport_t transport,
   t->fib_index = fib_index;
   t->sw_if_index = sw_if_index;

+  ip4_sw_interface_enable_disable (sw_if_index, 1);
+
   t->transport = transport;
   vec_validate_init_empty (gm->tunnel_index_by_sw_if_index, 
sw_if_index, ~0);

   gm->tunnel_index_by_sw_if_index[sw_if_index] = t_idx;
@@ -529,6 +531,7 @@ ipip_del_tunnel (u32 sw_if_index)
   if (t == NULL)
 return VNET_API_ERROR_NO_SUCH_ENTRY;

+  ip4_sw_interface_enable_disable (sw_if_index, 0);
   vnet_sw_interface_set_flags (vnm, sw_if_index, 0 /* down */ );
   gm->tunnel_index_by_sw_if_index[sw_if_index] = ~0;
   vnet_delete_hw_interface (vnm, t->hw_if_index);

Are we missing anything? Any comments will be appreciated.

Thanks,
Nitin









Re: [vpp-dev] Query on VPP behaviour when IP from same subnet configured on plain and vlan interface

2018-05-14 Thread Neale Ranns

VPP does not support multiple interfaces in the same subnet.
Your scenario will be a configuration error once:
  https://gerrit.fd.io/r/#/c/8057/
is committed.

/neale

From:  on behalf of bindiya Kurle 
Date: Monday, 7 May 2018 at 07:27
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] Query on VPP behaviour when IP from same subnet configured 
on plain and vlan interface

Hi,

13.0.0.200 ---+--- GigabitEthernet1/0/0     (plain interface) 13.0.0.2
              |
              +--- GigabitEthernet1/0/0.111 (vlan interface)  13.0.0.5

packet to send out: destination IP (13.0.0.200)

Fig 1.


I am trying to configure two IPs belonging to the same subnet on a plain and
a VLAN interface (refer to fig 1). While sending a packet, the ip4-lookup node
fetches the dpoi_index pertaining to the VLAN interface, which in turn gives
the software index of the VLAN interface in the lookup.
If I try the same scenario on Linux, a ping to the same destination IP
(13.0.0.200) works, as the kernel picks up the plain-interface route since
that is the first route in its routing table.



FIB table entry:

13.0.0.2/32  pmtu: 0

  unicast-ip4-chain

  [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:16 to:[0:0]]

[0] [@2]: dpo-receive: 13.0.0.2 on GigabitEthernet1/0/0

13.0.0.5/32  pmtu: 0

  unicast-ip4-chain

  [@0]: dpo-load-balance: [proto:ip4 index:17 buckets:1 uRPF:22 to:[0:0]]

[0] [@2]: dpo-receive: 13.0.0.5 on GigabitEthernet1/0/0.111

13.0.0.200/32  pmtu: 0
  UNRESOLVED

Questions:
1. Is there any specific reason why VPP always returns the last entry added
for that prefix instead of the first? Can VPP behaviour be made similar to
the Linux kernel behaviour?


Regards,
Bindiya




[vpp-dev] Using custom openssl with vpp #vpp

2018-05-14 Thread ductm18
Hi,
I'm trying to customize the openssl plugin so that it works with openssl 1.1
(with some modifications as well).
Applying the new openssl version to the system is not possible, since some
other apps rely on openssl and do not work with openssl 1.1.
Is there any configuration I can make to use vpp with a separately built
openssl? Or just some ideas about how to achieve that.
Any help will be highly appreciated.

DucTM


[vpp-dev] How to copy the SSH key to the Vagrant virtual machine

2018-05-14 Thread 汤超
According to the tutorial steps:
Copy your ssh-key to Vagrant VMs
This step has to be repeated every time your Vagrant VMs are re-created (i.e.
after a vagrant destroy command was issued):
echo csit@192.168.255.10{0,1,2} | xargs -n 1 ssh-copy-id

Respond with "csit" as password (without quotes). From now on you have 
password-less access from this account to csit@vagrant-vms via SSH.

Source of the tutorial: https://wiki.fd.io/view/CSIT/Tutorials/Vagrant/Virtualbox/Ubuntu

The following output appears:
(env) root@ubuntu:~# echo csit@192.168.255.10{0,1,2} | xargs -n 1 ssh-copy-id -f

/usr/bin/ssh-copy-id: ERROR: failed to open ID file '/root/.pub': No such file

/usr/bin/ssh-copy-id: ERROR: failed to open ID file '/root/.pub': No such file

/usr/bin/ssh-copy-id: ERROR: failed to open ID file '/root/.pub': No such file

This is my home directory listing:
(env) root@ubuntu:~# cd /root/
(env) root@ubuntu:~# ll
total 68
drwx-- 10 root root 4096 May 14 00:55 ./
drwxr-xr-x 24 root root 4096 May 13 17:58 ../
-rw---  1 root root 4055 May 14 00:01 .bash_history
-rw-r--r--  1 root root 3106 Oct 22  2015 .bashrc
drwx--  3 root root 4096 May 13 18:50 .cache/
drwx--  4 root root 4096 May 13 19:35 .config/
drwx--  3 root root 4096 May 10 23:13 .dbus/
drwx--  2 root root 4096 May 11 00:28 .gnupg/
drwxr-xr-x  2 root root 4096 May 14 00:55 key_backup/
-rw-r--r--  1 root root  148 Aug 17  2015 .profile
drwx--  2 root root 4096 May 13 23:59 .ssh/
-rw---  1 root root 1766 May 14 00:55 vagrant
drwxr-xr-x  7 root root 4096 May 14 00:16 .vagrant.d/
-rw-r--r--  1 root root  404 May 14 00:55 vagrant.pub
-rw---  1 root root 5465 May 14 00:17 .viminfo
drwx--  5 root root 4096 May 13 21:07 VirtualBox VMs/

What should I do? Do you have any suggestions to solve it? Thank you!
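
The errors suggest ssh-copy-id could not find a default identity file. One
thing to try, assuming a fresh key is acceptable (these commands are a
sketch, not from the tutorial): generate a key first and pass it explicitly:

  ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
  echo csit@192.168.255.10{0,1,2} | xargs -n 1 ssh-copy-id -i /root/.ssh/id_rsa.pub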




nwnj...@fiberhome.com


Re: [vpp-dev] question about the VCL

2018-05-14 Thread Edward Warnicke
Xyxue,

If you want to move raw IP/Ethernet around, I'd suggest looking at memif :)

Ed

On Mon, May 14, 2018 at 3:44 AM Florin Coras  wrote:

> Hi Xyxue,
>
> No, the stack does not support IPPROTO_RAW. Given that this is a user
> space stack and that you have access to things like memif, may I ask what
> use case you would need that for?
>
> Florin
>
>
> On May 14, 2018, at 12:58 AM, xyxue  wrote:
>
>
> Hi guys,
>
> Does the VCL support RAW_IP now? Or is there a plan to support it?
>
> Thanks,
> Xyxue
>
>
> 
>


Re: [vpp-dev] question about the VCL

2018-05-14 Thread Florin Coras
Hi Xyxue, 

No, the stack does not support IPPROTO_RAW. Given that this is a user space 
stack and that you have access to things like memif, may I ask what use case 
you would need that for?

Florin

> On May 14, 2018, at 12:58 AM, xyxue  wrote:
> 
> 
> Hi guys,
> 
> Does the VCL support RAW_IP now? Or is there a plan to support it?
> 
> Thanks,
> Xyxue
> 



Re: [vpp-dev] TCP performance - TSO - HW offloading in general.

2018-05-14 Thread Florin Coras
Hi Luca, 

That is most probably the reason. We don’t support raw sockets. 

Florin

> On May 14, 2018, at 1:21 AM, Luca Muscariello (lumuscar) wrote:
> 
> Hi Florin,
>
> Session enable does not help.
> hping is using raw sockets so this must be the reason.
>
> Luca
>
>
>
> From: Florin Coras 
> Date: Friday 11 May 2018 at 23:02
> To: Luca Muscariello 
> Cc: "vpp-dev@lists.fd.io" 
> Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.
>
> Hi Luca,
>
> Not really sure why the kernel is slow to reply to ping. Maybe it has to do 
> with scheduling but it’s just guess work. 
>
> I’ve never tried hping. Let me see if I understand your scenario: while 
> running iperf you tried to hping the stack and you got no rst back? Anything 
> interesting in “sh error” counters? If iperf wasn’t running, did you first 
> enable the stack with “session enable”?
>
> Florin
> 
> 
>> On May 11, 2018, at 3:19 AM, Luca Muscariello wrote:
>>
>> Florin,
>>
>> A few more comments about latency.
>> Some number in ms in the table below:
>>
>> This is ping and iperf3 concurrent. In case of VPP it is vppctl ping.
>>
>> Kernel w/ load   Kernel w/o load  VPP w/ load  VPP w/o load
>> Min.   :0.1920   Min.   :0.0610   Min.   :0.0573   Min.   :0.03480
>> 1st Qu.:0.2330   1st Qu.:0.1050   1st Qu.:0.2058   1st Qu.:0.04640
>> Median :0.2450   Median :0.1090   Median :0.2289   Median :0.04880
>> Mean   :0.2458   Mean   :0.1153   Mean   :0.2568   Mean   :0.05096
>> 3rd Qu.:0.2720   3rd Qu.:0.1290   3rd Qu.:0.2601   3rd Qu.:0.05270
>> Max.   :0.2800   Max.   :0.1740   Max.   :0.6926   Max.   :0.09420
>>
>> In short: ICMP packets have a lower latency under load.
>> I could interpret this as maybe due to vectorization. Also, the Linux kernel
>> is slower to reply to ping by a 2x factor (system call latency?): 115us vs
>> 50us in VPP; with load there is no difference. In this test Linux TCP is using TSO.
>>
>> While trying to use hping to get a latency sample with TCP instead of ICMP,
>> we noticed that the VPP TCP stack does not reply with a RST, so we don't get
>> any sample. Is that expected behavior?
>>
>> Thanks
>>
>>
>> Luca
>>
>>
>>
>>
>>
>> From: Luca Muscariello
>> Date: Thursday 10 May 2018 at 13:52
>> To: Florin Coras
>> Cc: Luca Muscariello, "vpp-dev@lists.fd.io"
>> Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.
>>
>> MTU had no effect, just statistical fluctuations in the test reports. Sorry 
>> for misreporting the info.
>>
>> We are exploiting vectorization as we have a single memif channel 
>> per transport socket so we can control the size of the batches dynamically. 
>>
>> In theory the size of outstanding data from the transport should be 
>> controlled in bytes for 
>> batching to be useful and not harmful as frame sizes can vary a lot. But I’m 
>> not aware of a queue abstraction from DPDK 
>> to control that from VPP.
>>
>> From: Florin Coras
>> Date: Wednesday 9 May 2018 at 18:23
>> To: Luca Muscariello
>> Cc: Luca Muscariello, "vpp-dev@lists.fd.io"
>> Subject: Re: [vpp-dev] TCP performance - TSO - HW offloading in general.
>>
>> Hi Luca,
>>
>> We don’t yet support pmtu in the stack so tcp uses a fixed 1460 mtu, unless 
>> you changed that, we shouldn’t generate jumbo packets. If we do, I’ll have 
>> to take a look at it :)
>>
>> If you already had your transport protocol, using memif is the natural way 
>> to go. Using the session layer makes sense only if you can implement your 
>> transport within vpp in a way that leverages vectorization or if it can 
>> leverage the existing transports (see for instance the TLS implementation).
>>
>> Until today [1] the stack did allow for excessive batching (generation of 
>> multiple frames in one dispatch loop) but we’re now restricting that to one. 
>> This is still far from proper pacing which is on our todo list. 
>>
>> Florin
>>
>> [1] https://gerrit.fd.io/r/#/c/12439/ 
>>
>> 
>> 
>> 
>> 
>>> On May 9, 2018, at 4:21 AM, Luca Muscariello (lumuscar) wrote:
>>>
>>> Florin,
>>>
>>> Thanks for the slide deck, I’ll check it soon.
>>>
>>> BTW, VPP/DPDK test was using jumbo frames by default so the TCP stack had a 
>>> little
>>> advantage wrt the Linux TCP stack which was using 1500B by default.
>>>
>>> By 

[vpp-dev] question about the VCL

2018-05-14 Thread xyxue

Hi guys,

Does the VCL support RAW_IP now? Or is there a plan to support it?

Thanks,
Xyxue