Re: [vpp-dev] Multicore Performance test

2018-11-21 Thread kyunghwan kim
Korian,

Thanks for your reply.
I solved the problem.

Previously, num-mbufs was at its default:
vpp# show dpdk buffer
name = "dpdk_mbuf_pool_socket 0"  available = 7938    allocated = 8446  total = 16384
name = "dpdk_mbuf_pool_socket 1"  available = 16384   allocated = 0     total = 16384
vpp#

After increasing num-mbufs in startup.conf:
vpp# show dpdk buffer
name = "dpdk_mbuf_pool_socket 0"  available = 119552  allocated = 8448  total = 128000
name = "dpdk_mbuf_pool_socket 1"  available = 128000  allocated = 0     total = 128000
vpp#
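
(For reference, num-mbufs lives in the dpdk stanza of startup.conf; 128000 is
the value that produced the output above, the rest of the stanza is
illustrative:)

dpdk {
  ## per-socket mbuf pool size; matches the per-socket totals shown above
  num-mbufs 128000
}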

With 64-byte packets flowing at 40 Gbps:
vpp# show dpdk buffer
name = "dpdk_mbuf_pool_socket 0"  available = 102069  allocated = 25776  total = 127845
name = "dpdk_mbuf_pool_socket 1"  available = 128000  allocated = 0      total = 128000
vpp#

I found out that buffers are going missing: the socket 0 pool total dropped from 128000 to 127845.
Thank you so much.

Regards,
Kyunghwan Kim


On Wed, Nov 21, 2018 at 9:29 PM, korian edeline wrote:

> Hello,
>
> On 11/21/18 1:10 PM, kyunghwan kim wrote:
> > rx-no-buf  1128129034176
>
>
> You should be able to fix this particular problem by increasing
> num-mbufs in startup.conf; you can check the allocation with vpp# sh
> dpdk buffer
>
>
> rx-miss    951486596
>
> This is probably another problem.
>
>
> Cheers,
>
> Korian
>
>

-- 

キム、キョンファン
Tel : 080-3600-2306
E-mail : gpi...@gmail.com



[vpp-dev] Request: please add "real" pcap ability #vpp

2018-11-21 Thread brian . peter . dickson
Hi, dev folks,

Apologies for my first message being kind of demanding.

However, I think this is a reasonable request.

What I am interested in, and I think this is likely to be a fairly universal 
desire, is the ability to properly integrate some kind of pcap packet capture 
into the full VPP graph.

The currently available mechanisms (pcap drop trace and pcap tx trace) do not 
apply to packets that are only "handled" by the host in question, i.e. that neither 
originate nor terminate on the local host.

In particular, I'm interested in something that can run on a bare metal host 
and, presuming sufficient resources can be given to it (cores, memory, etc), do 
packet capture at line rate.

Thus, any restriction ("run it on a VM") is not adequate.

Given that there is already code for handling pcap files (in 
vnet/unix IIRC), this should not be a lot of work.

There are two use cases I have:

* debugging data plane stuff on a vpp-based router (i.e. using the vppsb 
netlink and router projects)
* packet capture at line rate (a vpp host that only listens to or drops traffic, 
apart from the capture itself; i.e. a single-purpose host, bypassing 
kernel/driver limitations, that takes all ethernet traffic on a port and stuffs it 
into a pcap file)

* NB: for scaling purposes, it is reasonable to implement the pcap captures 
using RSS/RFS to spread flows across multiple cores, with each core running a 
thread that does its own pcap file writing; how that would be put into the "vpp 
graph" might be a little less than trivial, but should be straightforward, IMHO 
(a sketch of such a per-thread writer follows below)
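
For concreteness, a minimal sketch of the per-thread writer, in plain C with 
libc only; the names and file layout are illustrative, and this is deliberately 
not the existing vnet/unix pcap API:

#include <stdint.h>
#include <stdio.h>

typedef struct {
  uint32_t magic;               /* 0xa1b2c3d4: microsecond timestamps */
  uint16_t major, minor;        /* pcap format version 2.4 */
  int32_t  thiszone;
  uint32_t sigfigs, snaplen, linktype;  /* linktype 1 = Ethernet */
} pcap_file_hdr_t;

typedef struct {
  uint32_t ts_sec, ts_usec;     /* packet timestamp */
  uint32_t incl_len, orig_len;  /* captured vs. on-the-wire length */
} pcap_rec_hdr_t;

/* one file per RSS core, e.g. capture-core3.pcap */
static FILE *
pcap_open_per_thread (uint32_t thread_index)
{
  char name[64];
  snprintf (name, sizeof (name), "capture-core%u.pcap",
            (unsigned) thread_index);
  FILE *f = fopen (name, "wb");
  if (f)
    {
      pcap_file_hdr_t h = { 0xa1b2c3d4, 2, 4, 0, 0, 65535, 1 };
      fwrite (&h, sizeof (h), 1, f);
    }
  return f;
}

static void
pcap_write_packet (FILE * f, const uint8_t * pkt, uint32_t len,
                   uint32_t ts_sec, uint32_t ts_usec)
{
  pcap_rec_hdr_t r = { ts_sec, ts_usec, len, len };
  fwrite (&r, sizeof (r), 1, f);
  fwrite (pkt, len, 1, f);
}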

Thanks in advance.

Brian Dickson

P.S. There is a SERIOUS lack of useful documentation on how to actually do 
this, as a potential ad-hoc contributor. Not sure if you guys have gotten this 
feedback from anyone else.
P.P.S. I'm using 18.07 because that is the last version that builds alongside 
the vppsb netlink and router plugins.
P.P.P.S. Even getting 18.07 and vppsb to build was a nightmare. You should try 
doing this from scratch, i.e. put yourselves in the shoes of someone who just 
discovered vpp...


[vpp-dev] Build failing on Fedora

2018-11-21 Thread Stephen Hemminger
The build is failing for me on the latest up-to-date versions of Fedora and VPP.

This is a clean new VM; I did the install of dependencies, but the Makefile
is still confused.

$ make build
Please install missing RPMs: \npackage cmake3 is not installed\n
by executing "make install-dep"\n


Also, it looks like echo -e "..." is necessary here.
Weird that Debian uses the shell builtin echo, and Fedora uses /bin/echo.
Annoying that both versions have different semantics.
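
(A portable alternative, assuming the message is emitted from a shell recipe
in the Makefile, would be printf, whose escape handling is specified by
POSIX, unlike echo -e:)

	@printf 'Please install missing RPMs:\n  package cmake3 is not installed\nby executing "make install-dep"\n'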



Re: [vpp-dev] about sctp

2018-11-21 Thread 刘道林
Hi Marco,



Thanks for your reply.



Actually I've been trying to read the sctp code recently to understand how it 
works. My requirement is very simple: I need to run an sctp server based on vpp on 
one VM, and run an sctp client without vpp (I have this now) on the other VM. As 
in my email below, the process gets aborted (core dumped); I found the place 
yesterday, but I don't know the root cause. You can see the picture below:



[inline image omitted: screenshot of the crash in the MTU lookup]



The sw_if_index is -1, so it crashed when getting the MTU. Maybe I missed 
something? What I actually run is the CLI command from echo_server.c. As you 
mentioned, if I run test_sctp.py, will it work fine? My requirement is C language 
only, as it will be merged with some other code, so I'd rather not run it with Python.
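
The kind of guard I would expect there is something like this (a sketch; that 
the MTU comes from vnet_sw_interface_get_mtu is my assumption, not necessarily 
the exact call site in the sctp code):

  u32 mtu = 1500;                /* fall back to a conservative default */
  if (sw_if_index != (u32) ~0)   /* -1 means "no interface" */
    mtu = vnet_sw_interface_get_mtu (vnet_get_main (), sw_if_index,
                                     VNET_MTU_IP4);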



Yesterday I also tried removing this MTU code, and the crash disappeared 
although the sw_if_index is still -1. I then started the sctp client on the other 
VM to send INIT, but the server with vpp seemed to be dead (no crash, but it 
cannot accept any command; it seems to be stuck in a loop).



Anyway, I think it's better to try the CLI "test echo server" on your side and find 
and fix all the issues directly.

Best regards

刘道林 (Daolin Liu)

T大连市共进科技有限公司

DALIAN GONGJIN TECHNOLOGY CO.,LTD

中国大连市高新园区软件园路1A-4-24层

Floor 24th, 1A-4 Software Park Road, Hi-tech Zone, Dalian, Liaoning, China

直线(TEL):(86-411)39996705   分机(EXT):76824

手机(Mobile):(86)13704090959



-----Original Message-----
From: Marco Varlese [mailto:mvarl...@suse.de]
Sent: November 21, 2018 1:06
To: Liu Daolin (刘道林); vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] about sctp



Hi,

On Tue, 2018-11-20 at 02:50 +, Liu Daolin (刘道林) wrote:
> Hi,
>
> I encountered below Aborted (core dumped) issue:
>
> When I run "test echo server uri", it's ok for tcp, but crashed immediately
> for sctp.
>
> Please try this in your side and give me advice. I use 18.10. Thanks!

Can you please try to run "make test TEST=test_sctp"?

> From: Liu Daolin (刘道林)
> Sent: November 19, 2018 19:23
> To: 'vpp-dev@lists.fd.io'
> Subject: about sctp
>
> Hi,
>
> I'd like to know some information about sctp.
>
> Is this fully functional? Or just partly?

There are missing pieces to the SCTP implementation.

> Actually, I want to try simply with CLI to verify sctp basic functions now.
> But it seems that there is no CLI, and the binary APIs are also imperfect.

What do you actually mean by imperfect? Any input (e.g. patch submission) would
be greatly appreciated!

> Do you have any plan in the next release? Including the sample test.

I am planning to create some JIRA ticket(s) so that people can see what's
missing and contribute if they like. Would you be interested?

> Best regards
> 刘道林 (Daolin Liu)

> 本电子邮件(包括任何的附件)为本公司保密文件。本文件仅仅可为以上指定的收件人或公司使用,如果阁下非电子邮件所指定之收件人,那么阁下对该邮件部分或全部的泄漏、

> 阅览、复印、变更、散布或对邮件内容的使用都是被严格禁止的。如果阁下接收了该错误传送的电子邮件,敬请阁下通过回复该邮件的方式立即通知寄件人,同时删除你所接收到

> 的文本。 This e-mail may contain confidential and/or privileged information. If

> you are not the intended recipient (or have received this e-mail in error)

> please notify the sender immediately and destroy this e-mail. Any unauthorized

> copying, disclosure or distribution of the material in this e-mail is strictly

> forbidden.

> -=-=-=-=-=-=-=-=-=-=-=-

> Links: You receive all messages sent to this group.

>

> View/Reply Online (#11334): https://lists.fd.io/g/vpp-dev/message/11334

> Mute This Topic: https://lists.fd.io/mt/28240674/675056

> Group Owner: vpp-dev+ow...@lists.fd.io

> Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [mvarl...@suse.de]

> -=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] How to actively close the client connections in http server? #vnet

2018-11-21 Thread Florin Coras


> On Nov 21, 2018, at 8:55 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
> 
> Florin Coras <fcoras.li...@gmail.com> wrote on Wed, Nov 21, 2018 at 17:18:
> Hi Andreas, 
> 
> The trace below suggests your tcp connection was freed but you still got data 
> for it. tcp-input and the checks in established should prevent that from 
> happening, and the session layer should not receive any events after the 
> transport notifies it that the session will be cleaned up. 
> 
> So, apart from calling vnet_disconnect_session () on an rx event, have you 
> done anything else to the code?
> 
> Not to the session code. I got a bit creative when creating the listening 
> session and feeding the packets to ip-input.
> 
> The actual code for the session process is here: 
> https://gerrit.fd.io/r/c/15801/6/src/plugins/upf/upf_http_redirect_server.c#222
> I still need to address your comments there, but I don't think that this 
> has anything to do with the crash.
> 
> I'm going to test this a bit further with the http_server.

Ack. Do let me know what you find!

Cheers, 
Florin

> 
> Andreas
> 
> 
> Regards,
> Florin
> 
> 
>> On Nov 21, 2018, at 1:36 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
>> 
>> Florin Coras <fcoras.li...@gmail.com> wrote on Fri, May 18, 2018 at 08:12:
>> That http server is just example code that executes the contents of a get 
>> request as cli commands within a spawned vpp process. So, if you want to 
>> disconnect _after_ the reply is sent, call vnet_disconnect_session () at 
>> the end of http_cli_process.
>> 
>> Came across this when searching for a similar problem.
>> 
>> I tried exactly what Florin suggested in the rx_callback handler. Doing so 
>> results in segmentation faults in multiple places.
>> 
>> The first in tcp_input:
>> 
>> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
>> 0x770d5b25 in tcp_handle_postponed_dequeues (wrk=0x7fffb5ac5f00) at 
>> /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
>> 530tc->flags &= ~TCP_CONN_DEQ_PENDING;
>> (gdb) print tc
>> $1 = (tcp_connection_t *) 0x0
>> (gdb) bt
>> #0  0x770d5b25 in tcp_handle_postponed_dequeues (wrk=0x7fffb5ac5f00) 
>> at /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
>> #1  0x770db8bb in tcp46_established_inline (vm=0x768f2340 
>> , node=0x7fffb6dc0600, frame=0x7fffb5e1afc0, is_ip4=1) at 
>> /usr/src/vpp/src/vnet/tcp/tcp_input.c:2160
>> #2  0x770db951 in tcp4_established (vm=0x768f2340 
>> , node=0x7fffb6dc0600, from_frame=0x7fffb5e1afc0) at 
>> /usr/src/vpp/src/vnet/tcp/tcp_input.c:2171
>> #3  0x76669ab2 in dispatch_node (vm=0x768f2340 
>> , node=0x7fffb6dc0600, type=VLIB_NODE_TYPE_INTERNAL, 
>> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb5e1afc0, 
>> last_time_stamp=6761417783745271)
>> at /usr/src/vpp/src/vlib/main.c:1109
>> #4  0x7666a092 in dispatch_pending_node (vm=0x768f2340 
>> , pending_frame_index=10, 
>> last_time_stamp=6761417783745271) at /usr/src/vpp/src/vlib/main.c:1267
>> #5  0x7666bd23 in vlib_main_or_worker_loop (vm=0x768f2340 
>> , is_main=1) at /usr/src/vpp/src/vlib/main.c:1694
>> #6  0x7666c4fc in vlib_main_loop (vm=0x768f2340 
>> ) at /usr/src/vpp/src/vlib/main.c:1768
>> #7  0x7666d2a9 in vlib_main (vm=0x768f2340 , 
>> input=0x7fffb53fffb0) at /usr/src/vpp/src/vlib/main.c:1961
>> #8  0x766c7d2d in thread0 (arg=140737329963840) at 
>> /usr/src/vpp/src/vlib/unix/main.c:606
>> #9  0x75ee2e04 in clib_calljmp () from 
>> /usr/src/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.01
>> #10 0x7fffd0f0 in ?? ()
>> #11 0x766c81fa in vlib_unix_main (argc=44, argv=0x558939b0) at 
>> /usr/src/vpp/src/vlib/unix/main.c:675
>> #12 0xcce8 in main (argc=44, argv=0x558939b0) at 
>> /usr/src/vpp/src/vpp/vnet/main.c:272
>> 
>> After adding a check for tc == NULL, it crashes with
>> 
>> 0: /usr/src/vpp/src/vnet/session/session.h:394 (session_get_from_handle) 
>> assertion `! pool_is_free (smm->wrk[thread_index].sessions, _e)' fails
>> 
>> So it seems that it is currently not possible to use vnet_disconnect_session () 
>> from an rx_callback directly.
>> 
>> Any hints on how to disconnect the tcp session from the rx callback?
>> 
>> Regards
>> Andreas
>> 
>> 
>> Florin
>> 
>> 
>>> On May 17, 2018, at 10:52 PM, muziding wrote:
>>> 
>>> Hi
>>> 
>>> I want to make the http server example actively close the client 
>>> connection, instead of waiting for the client to close the connection, after 
>>> the http server has responded to the client request. What should I do?
>> 

Re: [vpp-dev] How to actively close the client connections in http server? #vnet

2018-11-21 Thread Andreas Schultz
Florin Coras <fcoras.li...@gmail.com> wrote on Wed, Nov 21, 2018 at 17:18:

> Hi Andreas,
>
> The trace below suggests your tcp connection was freed but you still got
> data for it. tcp-input and the checks in established should prevent that
> from happening, and the session layer should not receive any events after
> the transport notifies it that the session will be cleaned up.
>
> So, apart from calling vnet_disconnect_session () on an rx event, have you
> done anything else to the code?
>

Not to the session code. I got a bit creative when creating the listening
session and feeding the packets to ip-input.

The actual code for the session process is here:
https://gerrit.fd.io/r/c/15801/6/src/plugins/upf/upf_http_redirect_server.c#222
I still need to address your comments there, but I don't think that this has
anything to do with the crash.

I'm going to test this a bit further with the http_server.

Andreas


> Regards,
> Florin
>
>
> On Nov 21, 2018, at 1:36 AM, Andreas Schultz <
> andreas.schu...@travelping.com> wrote:
>
> Florin Coras <fcoras.li...@gmail.com> wrote on Fri, May 18, 2018 at 08:12:
>
>> That http server is just example code that executes the contents of a get
>> request as cli commands within a spawned vpp process. So, if you want to
>> disconnect _after_ the reply is sent, call vnet_disconnect_session ()
>> at the end of http_cli_process.
>>
>
> Came across this when searching for a similar problem.
>
> I tried exactly what Florin suggested in the rx_callback handler. Doing so
> results in segmentation faults in multiple places.
>
> The first in tcp_input:
>
> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> 0x770d5b25 in tcp_handle_postponed_dequeues (wrk=0x7fffb5ac5f00)
> at /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
> 530   tc->flags &= ~TCP_CONN_DEQ_PENDING;
> (gdb) print tc
> $1 = (tcp_connection_t *) 0x0
> (gdb) bt
> #0  0x770d5b25 in tcp_handle_postponed_dequeues
> (wrk=0x7fffb5ac5f00) at /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
> #1  0x770db8bb in tcp46_established_inline (vm=0x768f2340
> , node=0x7fffb6dc0600, frame=0x7fffb5e1afc0, is_ip4=1) at
> /usr/src/vpp/src/vnet/tcp/tcp_input.c:2160
> #2  0x770db951 in tcp4_established (vm=0x768f2340
> , node=0x7fffb6dc0600, from_frame=0x7fffb5e1afc0) at
> /usr/src/vpp/src/vnet/tcp/tcp_input.c:2171
> #3  0x76669ab2 in dispatch_node (vm=0x768f2340
> , node=0x7fffb6dc0600, type=VLIB_NODE_TYPE_INTERNAL,
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb5e1afc0,
> last_time_stamp=6761417783745271)
> at /usr/src/vpp/src/vlib/main.c:1109
> #4  0x7666a092 in dispatch_pending_node (vm=0x768f2340
> , pending_frame_index=10,
> last_time_stamp=6761417783745271) at /usr/src/vpp/src/vlib/main.c:1267
> #5  0x7666bd23 in vlib_main_or_worker_loop (vm=0x768f2340
> , is_main=1) at /usr/src/vpp/src/vlib/main.c:1694
> #6  0x7666c4fc in vlib_main_loop (vm=0x768f2340
> ) at /usr/src/vpp/src/vlib/main.c:1768
> #7  0x7666d2a9 in vlib_main (vm=0x768f2340 ,
> input=0x7fffb53fffb0) at /usr/src/vpp/src/vlib/main.c:1961
> #8  0x766c7d2d in thread0 (arg=140737329963840) at
> /usr/src/vpp/src/vlib/unix/main.c:606
> #9  0x75ee2e04 in clib_calljmp () from
> /usr/src/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.01
> #10 0x7fffd0f0 in ?? ()
> #11 0x766c81fa in vlib_unix_main (argc=44, argv=0x558939b0) at
> /usr/src/vpp/src/vlib/unix/main.c:675
> #12 0xcce8 in main (argc=44, argv=0x558939b0) at
> /usr/src/vpp/src/vpp/vnet/main.c:272
>
> After adding a check for tc == NULL, it crashes with
>
> 0: /usr/src/vpp/src/vnet/session/session.h:394 (session_get_from_handle)
> assertion `! pool_is_free (smm->wrk[thread_index].sessions, _e)' fails
>
> So it seems that it is currently not possible to use vnet_disconnect_session
> () from an rx_callback directly.
>
> Any hints on how to disconnect the tcp session from the rx callback?
>
> Regards
> Andreas
>
>
>> Florin
>>
>>
>> On May 17, 2018, at 10:52 PM, muziding  wrote:
>>
>> Hi
>>
>> I want to make the http server example actively close the client
>> connection, instead of waiting for the client to close the connection, after
>> the http server has responded to the client request. What should I do?
>>
>>

Re: [vpp-dev] How to actively close the client connections in http server? #vnet

2018-11-21 Thread Florin Coras
Hi Andreas, 

The trace below suggests your tcp connection was freed but you still got data 
for it. tcp-input and the checks in established should prevent that from 
happening, and the session layer should not receive any events after the 
transport notifies it that the session will be cleaned up. 

So, apart from calling vnet_disconnect_session () on an rx event, have you done 
anything else to the code?

Regards,
Florin
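
For reference, the pattern suggested above (disconnect from the app's process 
loop rather than from the rx callback) might look roughly like this. This is a 
sketch only; names such as EVT_DISCONNECT, my_rx_callback and my_process_node 
are illustrative, not the actual http_server code:

#define EVT_DISCONNECT 42	/* hypothetical event code */

static int
my_rx_callback (stream_session_t * s)
{
  /* ... dequeue the request, enqueue the reply ... */

  /* defer: signal the app's process node instead of disconnecting here */
  vlib_process_signal_event_mt (vlib_get_main (), my_process_node.index,
				EVT_DISCONNECT, session_handle (s));
  return 0;
}

/* in the app's process loop, on receiving EVT_DISCONNECT: */
static void
handle_disconnect (session_handle_t sh, u32 app_index)
{
  vnet_disconnect_args_t a = { .handle = sh, .app_index = app_index };
  vnet_disconnect_session (&a);
}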

> On Nov 21, 2018, at 1:36 AM, Andreas Schultz <andreas.schu...@travelping.com> wrote:
> 
> Florin Coras <fcoras.li...@gmail.com> wrote on Fri, May 18, 2018 at 08:12:
> That http server is just example code that executes the contents of a get 
> request as cli commands within a spawned vpp process. So, if you want to 
> disconnect _after_ the reply is sent, call vnet_disconnect_session () at 
> the end of http_cli_process.
> 
> Came across this when searching for a similar problem.
> 
> I tried exactly what Florin suggested in the rx_callback handler. Doing so 
> results in segmentation faults in multiple places.
> 
> The first in tcp_input:
> 
> Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
> 0x770d5b25 in tcp_handle_postponed_dequeues (wrk=0x7fffb5ac5f00) at 
> /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
> 530 tc->flags &= ~TCP_CONN_DEQ_PENDING;
> (gdb) print tc
> $1 = (tcp_connection_t *) 0x0
> (gdb) bt
> #0  0x770d5b25 in tcp_handle_postponed_dequeues (wrk=0x7fffb5ac5f00) 
> at /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
> #1  0x770db8bb in tcp46_established_inline (vm=0x768f2340 
> , node=0x7fffb6dc0600, frame=0x7fffb5e1afc0, is_ip4=1) at 
> /usr/src/vpp/src/vnet/tcp/tcp_input.c:2160
> #2  0x770db951 in tcp4_established (vm=0x768f2340 
> , node=0x7fffb6dc0600, from_frame=0x7fffb5e1afc0) at 
> /usr/src/vpp/src/vnet/tcp/tcp_input.c:2171
> #3  0x76669ab2 in dispatch_node (vm=0x768f2340 
> , node=0x7fffb6dc0600, type=VLIB_NODE_TYPE_INTERNAL, 
> dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb5e1afc0, 
> last_time_stamp=6761417783745271)
> at /usr/src/vpp/src/vlib/main.c:1109
> #4  0x7666a092 in dispatch_pending_node (vm=0x768f2340 
> , pending_frame_index=10, last_time_stamp=6761417783745271) 
> at /usr/src/vpp/src/vlib/main.c:1267
> #5  0x7666bd23 in vlib_main_or_worker_loop (vm=0x768f2340 
> , is_main=1) at /usr/src/vpp/src/vlib/main.c:1694
> #6  0x7666c4fc in vlib_main_loop (vm=0x768f2340 
> ) at /usr/src/vpp/src/vlib/main.c:1768
> #7  0x7666d2a9 in vlib_main (vm=0x768f2340 , 
> input=0x7fffb53fffb0) at /usr/src/vpp/src/vlib/main.c:1961
> #8  0x766c7d2d in thread0 (arg=140737329963840) at 
> /usr/src/vpp/src/vlib/unix/main.c:606
> #9  0x75ee2e04 in clib_calljmp () from 
> /usr/src/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.01
> #10 0x7fffd0f0 in ?? ()
> #11 0x766c81fa in vlib_unix_main (argc=44, argv=0x558939b0) at 
> /usr/src/vpp/src/vlib/unix/main.c:675
> #12 0xcce8 in main (argc=44, argv=0x558939b0) at 
> /usr/src/vpp/src/vpp/vnet/main.c:272
> 
> After adding a check for tc == NULL, it crashes with
> 
> 0: /usr/src/vpp/src/vnet/session/session.h:394 (session_get_from_handle) 
> assertion `! pool_is_free (smm->wrk[thread_index].sessions, _e)' fails
> 
> So it seems that it is currently not possible to use vnet_disconnect_session () 
> from an rx_callback directly.
> 
> Any hints on how to disconnect the tcp session from the rx callback?
> 
> Regards
> Andreas
> 
> 
> Florin
> 
> 
>> On May 17, 2018, at 10:52 PM, muziding wrote:
>> 
>> Hi
>> 
>> I want to make the http server example actively close the client 
>> connection, instead of waiting for the client to close the connection, after 
>> the http server has responded to the client request. What should I do?
> 
> 
> 
> 
> 
> -- 
> -- 
> Dipl.-Inform. Andreas Schultz
> 
> --- enabling your networks 

[vpp-dev] Questions about SIDs

2018-11-21 Thread Yosh Kikuchi
Hi VPP-dev,

I have a few questions about SIDs.
I have configured an SRv6 network (no IGP) for test purposes, and it works fine.
But I am a little confused about the variety of SIDs.

To create an SID, I use the command "sr localsid address".
Is this what we call a "prefix sid" or a "node sid" ?
How do I configure an "adjacency sid" ?

Regards,


[vpp-dev] FD.io CSIT-18.10.w47 Weekly Maintenance Report Update

2018-11-21 Thread Maciek Konstantynowicz (mkonstan) via Lists.Fd.Io
FD.io CSIT-18.10.w47 weekly maintenance report has been published on
FD.io docs site:

 html: https://docs.fd.io/csit/rls1810/report/
 pdf: https://docs.fd.io/csit/rls1810/report/_static/archive/csit_rls1810.pdf

Changes from last week .w46 version:

1. Added automated wrapping of long test names in legend to graphs.
2. Updated date and time format in the header.
3. Updated report versioning.
4. Added more test runs:
   a. HoneyComb Functional.
   b. VPP on 3n-hsw testbed.
5. Added more Current vs. Previous release performance comparisons:
   a. VPP 3n-skx: http://bit.ly/2R18ypw
   b. VPP 2n-skx: http://bit.ly/2A6RCqo
   c. DPDK 3n-skx: http://bit.ly/2Q6QVrp
6. Added performance comparisons between testbed types:
   a. VPP: 2n-skx vs. 3n-skx: http://bit.ly/2FzC84q
   b. DPDK: 2n-skx vs. 3n-skx: http://bit.ly/2OX9rxx
7. Added results for 2n-dnv Atom Denverton tests:
   a. VPP Packet Throughput: http://bit.ly/2Q8wdaA
   b. VPP MRR: http://bit.ly/2FwSj2b

All per-section links listed below stay unchanged.

FD.io VPP points of note in the report:

 1. VPP release notes
a. Changes in CSIT-18.10: http://bit.ly/2OxSUA6
b. Known issues: http://bit.ly/2REvB9D

 2. VPP performance graphs
a. Throughput: http://bit.ly/2OwQPEn
b. Speedup Multi-Core: http://bit.ly/2zztqgi
c. Latency: http://bit.ly/2F9j4K6

 3. VPP performance comparisons
a. VPP-18.10 vs. VPP-18.07: http://bit.ly/2RGBvXM
b. 3-Node Skylake vs. 3-Node Haswell testbeds: http://bit.ly/2DqjrOJ
c. 2-Node Skylake vs. 3-Node Skylake testbeds: http://bit.ly/2FzC84q

 4. VPP performance test details
a. Detailed results: http://bit.ly/2yVWMGf
b. Configuration: http://bit.ly/2qy7w8U
c. Run-time telemetry: http://bit.ly/2PQlkKw

DPDK Testpmd and L3fwd performance sections follow similar structure.

 1. DPDK release notes:
   a. Changes in CSIT-18.10: http://bit.ly/2Oy7E1U
   b. Known issues: http://bit.ly/2RGGE22

 2. DPDK performance graphs for Testpmd and L3fwd demonstrator apps
   a. Throughput: http://bit.ly/2PeSlAz
   b. Latency: http://bit.ly/2z5OSdx

 3. DPDK performance comparisons
a. DPDK-18.08 vs. DPDK-18.05: http://bit.ly/2Q6QVrp

Functional tests, including VPP_Device (functional device tests),
VPP_VIRL, HoneyComb, NSH_SFC and DMM are all included in the report.

Welcome all comments, best by email to csit-...@lists.fd.io.

Cheers,
-Maciek



[vpp-dev] Tests to Python 3

2018-11-21 Thread Ole Troan
Hi,

With Python 2.7 finally being deprecated in 2020, I did an initial try at 
running the test framework in Python 3.

https://gerrit.fd.io/r/#/c/16099/

Mainly issues with str/bytes. At least I got some of the tests running, but 
many are still failing.
E.g. test_lisp.py depends on an unmaintained external library that does not 
support Python 3 at all.

It’s not going to be a massive amount of work, but I’d appreciate some help.
Also, after trying this, my recommendation would be to do a flag day. When 
16099 is committed, tests will be on Python 3, and until then Python 2.7.
Supporting both at the same time is going to be more pain than it’s worth. Feel 
free to disagree on this point.

Cheers,
Ole


Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

2018-11-21 Thread Neale Ranns via Lists.Fd.Io

Hi Chetan,

The null-node should not be encountered under normal operation. The null-node 
always has an index/value of 0, so if the previous node has not been 
properly configured, or the arc taken from that node was wrong, then the packet 
can easily end up at the null-node.
To debug this I would suggest you run a debug image and enable the packet 
trajectory tracer (grep VLIB_BUFFER_TRACE_TRAJECTORY) so you can see where 
these packets originate from.
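
As a minimal sketch of how this typically happens (names below are illustrative), 
consider a node registration that declares more next nodes than it fills in; an 
unfilled slot stays 0, which is the null-node's index:

typedef enum { MY_NEXT_IP4_LOOKUP, MY_NEXT_PUNT, MY_N_NEXT } my_next_t;

static uword my_node_fn (vlib_main_t * vm, vlib_node_runtime_t * node,
			 vlib_frame_t * frame);

VLIB_REGISTER_NODE (my_node) = {
  .function = my_node_fn,
  .name = "my-node",
  .vector_size = sizeof (u32),
  .n_next_nodes = MY_N_NEXT,
  .next_nodes = {
    [MY_NEXT_IP4_LOOKUP] = "ip4-lookup",
    /* [MY_NEXT_PUNT] left out by mistake: the slot stays 0, so any packet
       enqueued with next0 = MY_NEXT_PUNT lands on the null-node */
  },
};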

Regards,
Neale

From:  on behalf of chetan bhasin 
Date: Wednesday, November 21, 2018 at 06:16
To: "Dave Barach (dbarach)" 
Cc: "vpp-dev@lists.fd.io" 
Subject: Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

Hi Dave,

Thanks a lot.

One more query: what is the purpose of null_node, and in what scenario is 
null_node hit?

Thanks,
Chetan Bhasin

On Tue, Nov 20, 2018 at 10:57 PM Dave Barach (dbarach) <dbar...@cisco.com> wrote:
See 
https://wiki.fd.io/view/VPP/Pulling,_Building,_Running,_Hacking_and_Pushing_VPP_Code#Pushing_Code_with_git_review

From: chetan bhasin <chetan.bhasin...@gmail.com>
Sent: Tuesday, November 20, 2018 11:43 AM
To: Dave Barach (dbarach) <dbar...@cisco.com>
Subject: Re: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

Thanks Dave!

I will try with DEBUG too.

Just want to understand the procedure to check in patches: we have actually 
done several fixes in VPP, so we are planning to check in all the patches.

Thanks,
Chetan Bhasin

On Tue, Nov 20, 2018, 18:02 Dave Barach (dbarach) <dbar...@cisco.com> wrote:
Several suggestions:

* Try a debug image (PLATFORM=vpp TAG=vpp_debug) so the crash will be 
more enlightening
* Switch to 18.10. 18.01 is no longer supported. We don’t use the 
mheap.c memory allocator anymore, and so on and so forth.
* See https://wiki.fd.io/view/VPP/BugReports


From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of chetan bhasin
Sent: Tuesday, November 20, 2018 5:31 AM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] Getting crash while running load on VPP18.01 for 6 hours

Hi Vpp-dev,

We are facing issues while running load for ~6 hours; we are getting the crash below.

Your Suggestion is really appreciated.


#1  0x2b00b990e8f8 in __GI_abort () at abort.c:90
#2  0x00405f23 in os_panic () at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vpp/vnet/main.c:268
#3  0x2b00b8d60710 in mheap_put (v=0x2b00ba3d8000, uoffset=2382207096) at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mheap.c:798
#4  0x2b00b8d8959e in clib_mem_free (p=0x2b00c8ba84a0) at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/mem.h:213
#5  vec_resize_allocate_memory (v=, 
length_increment=length_increment@entry=1, data_bytes=, 
header_bytes=, header_bytes@entry=0, 
data_align=data_align@entry=4) at 
/bfs-build/build-area.49/builds/LinuxNBngp_7.X_RH7/2018-11-16-0505/third-party/vpp/vpp_1801/build-data/../src/vppinfra/vec.c:96
#6  0x2b00b79e899d in _vec_resize (data_align=, 
header_bytes=, data_bytes=, 
length_increment=, v=) at 
/nfs-bfs/workspace/build-data/../src/vppinfra/vec.h:142
#7  get_frame_size_info (n_scalar_bytes=, 
n_vector_bytes=, nm=0x2b00c87a3160, nm=0x2b00c87a3160) at 
/nfs-bfs/workspace//build-data/../src/vlib/main.c:107
#8  0x2b00b79e8d79 in vlib_frame_free (vm=vm@entry=0x2b00c87a3050, 
r=r@entry=0x2b00c86ca368, f=f@entry=0x2b014b2ecb80) at 
/nfs-bfs//vpp_1801/build-data/../src/vlib/main.c:221
#9  0x2b00b79fe6e6 in null_node_fn (vm=0x2b00c87a3050, node=0x2b00c86ca368, 
frame=0x2b014b2ecb80) at /nfs-bfs/workspace/build-data/../src/vlib/node.c:512

Thanks,
Chetan


Re: [vpp-dev] Multicore Performance test

2018-11-21 Thread ko

Hello,

On 11/21/18 1:10 PM, kyunghwan kim wrote:

rx-no-buf  1128129034176



You should be able to fix this particular problem by increasing 
num-mbufs in startup.conf; you can check the allocation with vpp# sh 
dpdk buffer




rx-miss    951486596


This is probably another problem.


Cheers,

Korian



[vpp-dev] Multicore Performance test

2018-11-21 Thread kyunghwan kim
Hello everyone,

I'm trying to test multicore performance with VPP.
The topology of the test is simple:
   TG - DUT - TG

- The Specification of the DUT -
CPU   Intel(R) Xeon(R) CPU E5-2699A v4 @ 2.40GHz
NIC   XL710 40GbE QSFP+
Linux version   Ubuntu 16.04.3 LTS
VPP Version  v18.07.1-release
DPDK Version  DPDK 18.05.0

test Traffic  Bi-directional traffic (40 Gbps)
-

There is no problem up to 2 cores/port.
It looks like a bug when scaling out to 3 or more cores/port:
one port stops working.

Please look at the example below.
In "show interface", the rx packets count of one port is not counted,
but in "show hardware-interfaces" all ports receive packets (the rx
packets count is counted). The tx packets count is then half of the rx
packets count.
Do you have a solution to this problem?

Example with 4 cores/port:

vpp# show interface rx-placement
Thread 1 (vpp_wk_0):
  node dpdk-input:
FortyGigabitEthernet3/0/0 queue 0 (polling)
Thread 2 (vpp_wk_1):
  node dpdk-input:
FortyGigabitEthernet3/0/0 queue 1 (polling)
Thread 3 (vpp_wk_2):
  node dpdk-input:
FortyGigabitEthernet3/0/0 queue 2 (polling)
Thread 4 (vpp_wk_3):
  node dpdk-input:
FortyGigabitEthernet3/0/0 queue 3 (polling)
Thread 5 (vpp_wk_4):
  node dpdk-input:
FortyGigabitEthernet4/0/0 queue 0 (polling)
Thread 6 (vpp_wk_5):
  node dpdk-input:
FortyGigabitEthernet4/0/0 queue 1 (polling)
Thread 7 (vpp_wk_6):
  node dpdk-input:
FortyGigabitEthernet4/0/0 queue 2 (polling)
Thread 8 (vpp_wk_7):
  node dpdk-input:
FortyGigabitEthernet4/0/0 queue 3 (polling)
vpp#


vpp# show interface
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
FortyGigabitEthernet3/0/0         1      up        9000/0/0/0       rx packets                   4060
                                                                    rx bytes                   503376
                                                                    tx packets            41944643369
                                                                    tx bytes            5201135777674
                                                                    drops                           4
                                                                    ip4                          4059
                                                                    rx-no-buf           1128129034176
                                                                    rx-miss                3251542862
FortyGigabitEthernet4/0/0         2      up        9000/0/0/0       rx packets            83904541699
                                                                    rx bytes           10404163170612
                                                                    tx packets            41954781733
                                                                    tx bytes            5202392934810
                                                                    drops                     5120655
                                                                    ip4                   83904541698
                                                                    rx-no-buf            486471833920
                                                                    rx-miss                 951486596
local0                            0     down         0/0/0/0
vpp#


vpp# show hardware-interfaces
  NameIdx   Link  Hardware
FortyGigabitEthernet3/0/0  1 up   FortyGigabitEthernet3/0/0
  Ethernet address 3c:fd:fe:9f:ca:a0
  Intel X710/XL710 Family
carrier up full duplex speed 4 mtu 9202
flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
rx queues 4, rx desc 1024, tx queues 21, tx desc 1024
cpu socket 0
tx frames ok 41944643369
tx bytes ok5201135777692
rx frames ok 81604382684
rx bytes ok   10522134767640
rx missed 3251542862
rx no bufs 1607594119200
extended stats:
  rx good packets81604382684
  tx good packets41944643369
  rx good bytes   10522134767640
  tx good bytes5201135777692
  rx missed errors3251542862
  rx mbuf allocation errors1607594119200
  rx unicast packets 84855925546
  rx unknown protocol packets 3251546922
  tx unicast packets 41944643368
  tx broadcast packets 1
  rx size 64 packets   1
  rx size 128 to 255 packets 84855925545
  tx size 64 packets   1
  tx size 65 to 127 packets  293
  tx size 128 to 255 packets 41944643368
FortyGigabitEthernet4/0/0  2 up   FortyGigabitEthernet4/0/0
  Ethernet address 3c:fd:fe:9f:ca:58
  Intel X710/XL710 Family
carrier up full duplex speed 4 mtu 9202
flags: admin-up pmd maybe-multiseg tx-offload intel-phdr-cksum
rx queues 4, rx desc 1024, tx queues 21, tx desc 1024
cpu socket 0
tx frames ok 41954781733
tx bytes ok

Re: [vpp-dev] How to actively close the client connections in http server? #vnet

2018-11-21 Thread Andreas Schultz
Florin Coras <fcoras.li...@gmail.com> wrote on Fri, May 18, 2018 at 08:12:

> That http server is just example code that executes the contents of a get
> request as cli commands within a spawned vpp process. So, if you want to
> disconnect _after_ the reply is sent, call vnet_disconnect_session ()
> at the end of http_cli_process.
>

Came across this when searching for a similar problem.

I tried exactly what Florin suggested in the rx_callback handler. Doing so
results in segmentation faults in multiple places.

The first in tcp_input:

Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x770d5b25 in tcp_handle_postponed_dequeues (wrk=0x7fffb5ac5f00) at
/usr/src/vpp/src/vnet/tcp/tcp_input.c:530
530   tc->flags &= ~TCP_CONN_DEQ_PENDING;
(gdb) print tc
$1 = (tcp_connection_t *) 0x0
(gdb) bt
#0  0x770d5b25 in tcp_handle_postponed_dequeues
(wrk=0x7fffb5ac5f00) at /usr/src/vpp/src/vnet/tcp/tcp_input.c:530
#1  0x770db8bb in tcp46_established_inline (vm=0x768f2340
, node=0x7fffb6dc0600, frame=0x7fffb5e1afc0, is_ip4=1) at
/usr/src/vpp/src/vnet/tcp/tcp_input.c:2160
#2  0x770db951 in tcp4_established (vm=0x768f2340
, node=0x7fffb6dc0600, from_frame=0x7fffb5e1afc0) at
/usr/src/vpp/src/vnet/tcp/tcp_input.c:2171
#3  0x76669ab2 in dispatch_node (vm=0x768f2340
, node=0x7fffb6dc0600, type=VLIB_NODE_TYPE_INTERNAL,
dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fffb5e1afc0,
last_time_stamp=6761417783745271)
at /usr/src/vpp/src/vlib/main.c:1109
#4  0x7666a092 in dispatch_pending_node (vm=0x768f2340
, pending_frame_index=10,
last_time_stamp=6761417783745271) at /usr/src/vpp/src/vlib/main.c:1267
#5  0x7666bd23 in vlib_main_or_worker_loop (vm=0x768f2340
, is_main=1) at /usr/src/vpp/src/vlib/main.c:1694
#6  0x7666c4fc in vlib_main_loop (vm=0x768f2340
) at /usr/src/vpp/src/vlib/main.c:1768
#7  0x7666d2a9 in vlib_main (vm=0x768f2340 ,
input=0x7fffb53fffb0) at /usr/src/vpp/src/vlib/main.c:1961
#8  0x766c7d2d in thread0 (arg=140737329963840) at
/usr/src/vpp/src/vlib/unix/main.c:606
#9  0x75ee2e04 in clib_calljmp () from
/usr/src/vpp/build-root/install-vpp_debug-native/vpp/lib/libvppinfra.so.19.01
#10 0x7fffd0f0 in ?? ()
#11 0x766c81fa in vlib_unix_main (argc=44, argv=0x558939b0) at
/usr/src/vpp/src/vlib/unix/main.c:675
#12 0xcce8 in main (argc=44, argv=0x558939b0) at
/usr/src/vpp/src/vpp/vnet/main.c:272

After adding a check for tc == NULL, it crashes with

0: /usr/src/vpp/src/vnet/session/session.h:394 (session_get_from_handle)
assertion `! pool_is_free (smm->wrk[thread_index].sessions, _e)' fails

So it seems that it is currently not possible to use vnet_disconnect_session
() from an rx_callback directly.

Any hints on how to disconnect the tcp session from the rx callback?

Regards
Andreas


> Florin
>
>
> On May 17, 2018, at 10:52 PM, muziding  wrote:
>
> Hi
>
> I want to make the http server example actively close the client
> connection, instead of waiting for the client to close the connection, after
> the http server has responded to the client request. What should I do?
>
>
-- 
Dipl.-Inform. Andreas Schultz

--- enabling your networks --
Travelping GmbH Phone:  +49-391-81 90 99 0
Roentgenstr. 13 Fax:+49-391-81 90 99 299
39108 Magdeburg Email:  i...@travelping.com
GERMANY Web:http://www.travelping.com

Company Registration: Amtsgericht StendalReg No.:   HRB 10578
Geschaeftsfuehrer: Holger Winkelmann  VAT ID No.: DE236673780
-