[vpp-dev] VPP's QoS functions

2018-04-23 Thread Masaaki Teshigawara
hello all,

I'd like to know the use cases and usage of the following commands, because I 
think they are related to QoS functionality.
Are there any use cases or operation samples?
- policer : https://docs.fd.io/vpp/18.07/clicmd_src_vnet_policer.html
- qos : https://docs.fd.io/vpp/18.07/clicmd_src_vnet_qos.html

Thank you in advance.

Best regards,
Teshigawara






[vpp-dev] Draft VPP 18.04 release notes

2018-04-23 Thread Chris Luke
All,

A first pass at the release notes for VPP 18.04 is available for review in 
https://gerrit.fd.io/r/#/c/12038/ . I have not yet walked the commit history 
looking for juicy key features to include in the list; I'll do that tomorrow. 
Suggestions are, of course, welcome!

A preview can be seen at 
http://brae.flirble.org/~chrisy/vpp/d0/dde/release_notes_1804.html .

Cheers,
Chris.


Re: [vpp-dev] VLAN to VLAN

2018-04-23 Thread carlito nueno
Any suggestions?

Thanks


Re: [vpp-dev] Question of worker thread handoff

2018-04-23 Thread Kingwel Xie
Hi Damjan,

I would say there is nothing we can do but drop the packets. Developers have 
to take care of how they use the handoff mechanism. In our case, or the NAT case, 
it has to be ‘no wait’, while it could be ‘wait forever’ in a pipeline mode – 
worker A -> B -> C.

BTW, my colleague Lollita thought there might be some improvements we could 
make to the handoff queue. Lollita, please share what you found.

Regards,
Kingwel

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, April 23, 2018 8:32 PM
To: Kingwel Xie
Cc: Ole Troan; vpp-dev; Lollita Liu
Subject: Re: [vpp-dev] Question of worker thread handoff


Dear Kingwel,

What would you expect us to do if A waits for B to take stuff from the queue 
and at the same time B waits for A for the same reason, besides what we already 
do in the NAT code, which is to drop instead of wait?

--
Damjan


On 23 Apr 2018, at 14:14, Kingwel Xie wrote:

Hi Ole, Damjan,

Thanks for the comments.

But I’m afraid this is the typical case where workers hand off to each other, 
if we don’t want to create an I/O thread which might become the bottleneck in 
the end.

Regards,
Kingwel

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, April 23, 2018 6:25 PM
To: Ole Troan; Kingwel Xie
Cc: vpp-dev; Lollita Liu
Subject: Re: [vpp-dev] Question of worker thread handoff


Yes, there are 2 options when the handoff queue is full: drop or wait.
Wait gives you a nice back-pressure mechanism, as it slows down the input worker,
but it will not work in the case where A hands off to B and B hands off to A.

--
Damjan



On 23 Apr 2018, at 10:50, Ole Troan wrote:

Kingwel,

Yes, it's possible to deadlock in this case.
We had a similar issue with the NAT implementation. While testing, I think we 
ended up dropping when the queue was full.

Best regards,
Ole



On 23 Apr 2018, at 10:33, Kingwel Xie wrote:

Hi Damjan and all,

We are currently thinking of how to utilize the handoff mechanism to serve our 
application logic – to run a packet re-ordering and re-transmit queue in the 
same worker context to avoid any lock between threads. We came across a 
question when looking into the implementation of handoff. Hope you can clear it 
up for us.

In vlib_get_worker_handoff_queue_elt -> vlib_get_frame_queue_elt  :

 /* Wait until a ring slot is available */
 while (new_tail >= fq->head_hint + fq->nelts)
vlib_worker_thread_barrier_check ();

We understand that the worker has to wait for an available slot from the other 
worker before putting a buffer into it. Then the question is: is it a potential 
deadlock if two workers wait for each other? E.g., with workers A and B, A is 
going to hand off to B, but unfortunately at the same time B has the same thing 
to do towards A, so they are both waiting forever. If that is true, is it 
better to drop the packet when the ring is full?

I copied my colleague Lollita into this discussion; she is working on it and 
knows more about this hypothesis.

Regards,
Kingwel







Re: [vpp-dev] VPP with DPDK eventdev

2018-04-23 Thread Damjan Marion

I’m not aware of such activities, but before you start working on the actual 
implementation I would suggest a community discussion on architecture, use cases 
and required changes in the code. The biweekly community call is likely a good 
forum for that.

— 
Damjan

> On 22 Apr 2018, at 07:17, Andrew Pinski  wrote:
> 
> Hi all,
>  We were wondering if anyone is currently working on getting VPP to
> work with the DPDK eventdev?  We don't want to duplicate effort.
> 
> Thanks,
> Andrew
> 
> 
> 




[vpp-dev] VPP with DPDK eventdev

2018-04-23 Thread Andrew Pinski
Hi all,
  We were wondering if anyone is currently working on getting VPP to
work with the DPDK eventdev?  We don't want to duplicate effort.

Thanks,
Andrew




[vpp-dev] Config file for QoS

2018-04-23 Thread Reza Mirzaei
Hi,

I want to set QoS parameters according to this tutorial [1], but I
couldn't find the file for port, subport and pipe configurations. Can
you please help me with this matter?

Best regards 

Reza 

Links:
--
[1] https://docs.fd.io/vpp/16.12/qos_doc.html


[vpp-dev] Reminder: stable/1804 will release on Wednesday!

2018-04-23 Thread Chris Luke
All,

A reminder that VPP 18.04 will be released this Wednesday, April 25th.

We're still accepting critical bug fixes, but at this point I expect few 
patches between now and then. That said, feedback on the stability of the 
branch is always welcome.

Cheers,
Chris.


Re: [vpp-dev] #vpp vpls configuration

2018-04-23 Thread Neale Ranns
Hi,

Please try:

  https://gerrit.fd.io/r/12014

you don’t need a next-hop when specifying an L2 receive path, so:
  mpls local-label add eos 1023 via l2-input-on mpls-tunnel0

regards,
neale

From:  on behalf of "omid via Lists.Fd.Io" 

Re: [vpp-dev] question about set ip arp

2018-04-23 Thread Neale Ranns
Hi Xyxue,

Can you please test to see if the situation improves with:
  https://gerrit.fd.io/r/#/c/12012/

thanks,
neale

From:  on behalf of xyxue 
Date: Friday, 20 April 2018 at 11:31
To: "vpp-dev@lists.fd.io" 
Subject: [vpp-dev] question about set ip arp


Hi guys,

I'm testing 'set ip arp'. When I don't configure the param 'no-fib-entry', the 
configuration time for 100k entries is 19+ minutes. When I configure the param 
'no-fib-entry', the time is 9 s.
Can I use 'set ip arp ... no-fib-entry' plus 'ip route add' to achieve the same 
goal as 'set ip arp' without 'no-fib-entry'?
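
For concreteness, the combination I'm asking about would look something like 
this (the interface name and addresses below are just placeholders):

  set ip arp GigabitEthernet0/8/0 10.1.1.2 00:50:56:aa:bb:cc no-fib-entry
  ip route add 10.1.1.2/32 via GigabitEthernet0/8/0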

The most time-consuming part is 'clib_bihash_foreach_key_value_pair_24_8'. The 
stack info is shown below:
#0 clib_bihash_foreach_key_value_pair_24_8 (h=0x7fffb5d4c840, 
callback=0x7719c98d , arg=0x7fffb5d33dc0)
at /home/vpp/build-data/../src/vppinfra/bihash_template.c:589
#1 0x7719cafd in adj_nbr_walk_nh4 (sw_if_index=1, addr=0x7fffb5d4c0f8, 
cb=0x76cacb17 , ctx=0x7fffb5d4c0f4)
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:642
#2 0x76cacd64 in arp_update_adjacency (vnm=0x7763a540 , 
sw_if_index=1, ai=1) at /home/vpp/build-data/../src/vnet/ethernet/arp.c:466
#3 0x76cbb6fe in ethernet_update_adjacency (vnm=0x7763a540 
, sw_if_index=1, ai=1) at 
/home/vpp/build-data/../src/vnet/ethernet/interface.c:208
#4 0x771aca55 in vnet_update_adjacency_for_sw_interface 
(vnm=0x7763a540 , sw_if_index=1, ai=1)
at /home/vpp/build-data/../src/vnet/adj/rewrite.c:225
#5 0x7719c201 in adj_nbr_add_or_lock (nh_proto=FIB_PROTOCOL_IP4, 
link_type=VNET_LINK_IP4, nh_addr=0x7fffb5d47ab0, sw_if_index=1)
at /home/vpp/build-data/../src/vnet/adj/adj_nbr.c:246
#6 0x7718eb6a in fib_path_attached_next_hop_get_adj 
(path=0x7fffb5d47a88, link=VNET_LINK_IP4) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:664
#7 0x7718ebc8 in fib_path_attached_next_hop_set (path=0x7fffb5d47a88) 
at /home/vpp/build-data/../src/vnet/fib/fib_path.c:678
#8 0x77191077 in fib_path_resolve (path_index=14) at 
/home/vpp/build-data/../src/vnet/fib/fib_path.c:1862
#9 0x7718adb4 in fib_path_list_resolve (path_list=0x7fffb5ade9a4) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:567
#10 0x7718b27d in fib_path_list_create (flags=FIB_PATH_LIST_FLAG_NONE, 
rpaths=0x7fffb5d4c56c) at 
/home/vpp/build-data/../src/vnet/fib/fib_path_list.c:734
#11 0x77185732 in fib_entry_src_adj_path_swap (src=0x7fffb5c3aa94, 
entry=0x7fffb5d3ad2c, pl_flags=FIB_PATH_LIST_FLAG_NONE, paths=0x7fffb5d4c56c)
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src_adj.c:110
#12 0x77181ed7 in fib_entry_src_action_path_swap 
(fib_entry=0x7fffb5d3ad2c, source=FIB_SOURCE_ADJ, 
flags=FIB_ENTRY_FLAG_ATTACHED, rpaths=0x7fffb5d4c56c)
at /home/vpp/build-data/../src/vnet/fib/fib_entry_src.c:1191
#13 0x7717d63c in fib_entry_create (fib_index=0, prefix=0x7fffb5d34400, 
source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, paths=0x7fffb5d4c56c)
at /home/vpp/build-data/../src/vnet/fib/fib_entry.c:828
#14 0x7716dcca in fib_table_entry_path_add2 (fib_index=0, 
prefix=0x7fffb5d34400, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
rpath=0x7fffb5d4c56c)
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:597
#15 0x7716dba9 in fib_table_entry_path_add (fib_index=0, 
prefix=0x7fffb5d34400, source=FIB_SOURCE_ADJ, flags=FIB_ENTRY_FLAG_ATTACHED, 
next_hop_proto=DPO_PROTO_IP4,
next_hop=0x7fffb5d34404, next_hop_sw_if_index=1, next_hop_fib_index=4294967295, 
next_hop_weight=1, next_hop_labels=0x0, path_flags=FIB_ROUTE_PATH_FLAG_NONE)
at /home/vpp/build-data/../src/vnet/fib/fib_table.c:569
#16 0x76cacef5 in arp_adj_fib_add (e=0x7fffb5d4c0f4, fib_index=0) at 
/home/vpp/build-data/../src/vnet/ethernet/arp.c:550
#17 0x76cad644 in vnet_arp_set_ip4_over_ethernet_internal 
(vnm=0x7763a540 , args=0x7fffb5d34700)
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:618
#18 0x76cb2f1a in set_ip4_over_ethernet_rpc_callback (a=0x7fffb5d34700) 
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:1989
#19 0x779442c9 in vl_api_rpc_call_main_thread_inline (fp=0x76cb2e09 
, data=0x7fffb5d34700 "\001", 
data_length=28,
force_rpc=0 '\000') at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2061
#20 0x7794441c in vl_api_rpc_call_main_thread (fp=0x76cb2e09 
, data=0x7fffb5d34700 "\001", 
data_length=28)
at /home/vpp/build-data/../src/vlibmemory/memory_vlib.c:2107
#21 0x76cb35c7 in vnet_arp_set_ip4_over_ethernet (vnm=0x7763a540 
, sw_if_index=1, a_arg=0x7fffb5d34800, is_static=0, 
is_no_fib_entry=0)
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2074
#22 0x76cb4015 in ip_arp_add_del_command_fn (vm=0x77923420 
, is_del=0, input=0x7fffb5d34ec0, cmd=0x7fffb5c78864)
at /home/vpp/build-data/../src/vnet/ethernet/arp.c:2233


Thanks,
Xyxue








Re: [vpp-dev] Question of worker thread handoff

2018-04-23 Thread Damjan Marion

Dear Kingwel,

What would you expect us to do if A waits for B to take stuff from the queue 
and at the same time B waits for A for the same reason, besides what we already 
do in the NAT code, which is to drop instead of wait?

--
Damjan

On 23 Apr 2018, at 14:14, Kingwel Xie wrote:

Hi Ole, Damjan,

Thanks for the comments.

But I’m afraid this is the typical case where workers hand off to each other, 
if we don’t want to create an I/O thread which might become the bottleneck in 
the end.

Regards,
Kingwel

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, April 23, 2018 6:25 PM
To: Ole Troan; Kingwel Xie
Cc: vpp-dev; Lollita Liu
Subject: Re: [vpp-dev] Question of worker thread handoff


Yes, there are 2 options when the handoff queue is full: drop or wait.
Wait gives you a nice back-pressure mechanism, as it slows down the input worker,
but it will not work in the case where A hands off to B and B hands off to A.

--
Damjan


On 23 Apr 2018, at 10:50, Ole Troan wrote:

Kingwel,

Yes, it's possible to deadlock in this case.
We had a similar issue with the NAT implementation. While testing, I think we 
ended up dropping when the queue was full.

Best regards,
Ole


On 23 Apr 2018, at 10:33, Kingwel Xie wrote:

Hi Damjan and all,

We are currently thinking of how to utilize the handoff mechanism to serve our 
application logic – to run a packet re-ordering and re-transmit queue in the 
same worker context to avoid any lock between threads. We came across a 
question when looking into the implementation of handoff. Hope you can clear it 
up for us.

In vlib_get_worker_handoff_queue_elt -> vlib_get_frame_queue_elt  :

 /* Wait until a ring slot is available */
 while (new_tail >= fq->head_hint + fq->nelts)
vlib_worker_thread_barrier_check ();

We understand that the worker has to wait for an available slot from the other 
worker before putting a buffer into it. Then the question is: is it a potential 
deadlock if two workers wait for each other? E.g., with workers A and B, A is 
going to hand off to B, but unfortunately at the same time B has the same thing 
to do towards A, so they are both waiting forever. If that is true, is it 
better to drop the packet when the ring is full?

I copied my colleague Lollita into this discussion; she is working on it and 
knows more about this hypothesis.

Regards,
Kingwel







Re: [vpp-dev] Question of worker thread handoff

2018-04-23 Thread Kingwel Xie
Hi Ole, Damjan,

Thanks for the comments.

But I’m afraid this is the typical case where workers hand off to each other, 
if we don’t want to create an I/O thread which might become the bottleneck in 
the end.

Regards,
Kingwel

From: Damjan Marion (damarion) [mailto:damar...@cisco.com]
Sent: Monday, April 23, 2018 6:25 PM
To: Ole Troan; Kingwel Xie
Cc: vpp-dev; Lollita Liu
Subject: Re: [vpp-dev] Question of worker thread handoff


Yes, there are 2 options when the handoff queue is full: drop or wait.
Wait gives you a nice back-pressure mechanism, as it slows down the input worker,
but it will not work in the case where A hands off to B and B hands off to A.

--
Damjan


On 23 Apr 2018, at 10:50, Ole Troan wrote:

Kingwel,

Yes, it's possible to deadlock in this case.
We had a similar issue with the NAT implementation. While testing, I think we 
ended up dropping when the queue was full.

Best regards,
Ole


On 23 Apr 2018, at 10:33, Kingwel Xie wrote:

Hi Damjan and all,

We are currently thinking of how to utilize the handoff mechanism to serve our 
application logic – to run a packet re-ordering and re-transmit queue in the 
same worker context to avoid any lock between threads. We came across a 
question when looking into the implementation of handoff. Hope you can clear it 
up for us.

In vlib_get_worker_handoff_queue_elt -> vlib_get_frame_queue_elt  :

 /* Wait until a ring slot is available */
 while (new_tail >= fq->head_hint + fq->nelts)
vlib_worker_thread_barrier_check ();

We understand that the worker has to wait for an available slot from the other 
worker before putting a buffer into it. Then the question is: is it a potential 
deadlock if two workers wait for each other? E.g., with workers A and B, A is 
going to hand off to B, but unfortunately at the same time B has the same thing 
to do towards A, so they are both waiting forever. If that is true, is it 
better to drop the packet when the ring is full?

I copied my colleague Lollita into this discussion; she is working on it and 
knows more about this hypothesis.

Regards,
Kingwel








Re: [vpp-dev] Question of worker thread handoff

2018-04-23 Thread Ole Troan
Kingwel,

Yes, it's possible to deadlock in this case.
We had a similar issue with the NAT implementation. While testing, I think we 
ended up dropping when the queue was full.

Best regards,
Ole

> On 23 Apr 2018, at 10:33, Kingwel Xie wrote:
> 
> Hi Damjan and all,
> 
> We are currently thinking of how to utilize the handoff mechanism to serve 
> our application logic – to run a packet re-ordering and re-transmit queue in 
> the same worker context to avoid any lock between threads. We came across a 
> question when looking into the implementation of handoff. Hope you can clear 
> it up for us.
> 
> In vlib_get_worker_handoff_queue_elt -> vlib_get_frame_queue_elt  :
> 
>   /* Wait until a ring slot is available */
>   while (new_tail >= fq->head_hint + fq->nelts)
> vlib_worker_thread_barrier_check ();
> 
> We understand that the worker has to wait for an available slot from the 
> other worker before putting a buffer into it. Then the question is: is it a 
> potential deadlock if two workers wait for each other? E.g., with workers A 
> and B, A is going to hand off to B, but unfortunately at the same time B has 
> the same thing to do towards A, so they are both waiting forever. If that is 
> true, is it better to drop the packet when the ring is full?
> 
> I copied my colleague Lollita into this discussion; she is working on it and 
> knows more about this hypothesis.
> 
> Regards,
> Kingwel
> 
> 
> 
> 




[vpp-dev] Question of worker thread handoff

2018-04-23 Thread Kingwel Xie
Hi Damjan and all,

We are currently thinking of how to utilize the handoff mechanism to serve our 
application logic – to run a packet re-ordering and re-transmit queue in the 
same worker context to avoid any lock between threads. We came across a 
question when looking into the implementation of handoff. Hope you can clear it 
up for us.

In vlib_get_worker_handoff_queue_elt -> vlib_get_frame_queue_elt  :

  /* Wait until a ring slot is available */
  while (new_tail >= fq->head_hint + fq->nelts)
vlib_worker_thread_barrier_check ();

We understand that the worker has to wait for an available slot from the other 
worker before putting a buffer into it. Then the question is: is it a potential 
deadlock if two workers wait for each other? E.g., with workers A and B, A is 
going to hand off to B, but unfortunately at the same time B has the same thing 
to do towards A, so they are both waiting forever. If that is true, is it 
better to drop the packet when the ring is full?
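
For illustration, here is a minimal sketch of the 'drop instead of wait' 
behaviour we have in mind (the ring_t type and helper functions below are made 
up for this example, not actual VPP APIs):

  typedef struct { unsigned head, tail, nelts; } ring_t;

  static int ring_full (ring_t * r) { return r->tail - r->head >= r->nelts; }

  /* Returns 0 when the ring is full so the caller can free the buffer instead
     of spinning; never blocking avoids the A<->B mutual-wait deadlock. */
  static int try_handoff (ring_t * r, unsigned buffer_index)
  {
    if (ring_full (r))
      return 0;                /* drop path: caller frees the packet */
    /* ... write buffer_index into the ring slot at r->tail ... */
    r->tail++;
    return 1;                  /* handed off */
  }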

I copied my colleague Lollita into this discussion; she is working on it and 
knows more about this hypothesis.

Regards,
Kingwel





Re: [vpp-dev] mheap performance issue and fixup

2018-04-23 Thread Kingwel Xie
Did you delete the shared-memory handles? These two files: /dev/shm/global_vm 
and /dev/shm/vpe-api.

The mheap header structure has changed, so you have to ask VPP to re-create the 
shared-memory heap.
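
With VPP stopped, removing them would look something like this (using the two 
paths mentioned above):

  rm -f /dev/shm/global_vm /dev/shm/vpe-api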

Sorry, I forgot to mention that before.


From: 薛欣颖 [mailto:xy...@fiberhome.com]
Sent: Monday, April 23, 2018 3:25 PM
To: Kingwel Xie; Damjan Marion; nranns
Cc: vpp-dev
Subject: Re: Re: [vpp-dev] mheap performance issue and fixup

Hi Kingwel,

After I merged the three patches, there is a SIGSEGV when I start up VPP (not 
every time). The error didn't appear before.
Is there anything I can do to fix it?

Program received signal SIGSEGV, Segmentation fault.
clib_mem_alloc_aligned_at_offset (size=54, align=4, align_offset=4, 
os_out_of_memory_on_failure=1) at /home/vpp/build-data/../src/vppinfra/mem.h:90
90 cpu = os_get_thread_index ();
(gdb) bt
#0 clib_mem_alloc_aligned_at_offset (size=54, align=4, align_offset=4, 
os_out_of_memory_on_failure=1) at /home/vpp/build-data/../src/vppinfra/mem.h:90
#1 0x7697fcde in vec_resize_allocate_memory (v=0x0, 
length_increment=50, data_bytes=54, header_bytes=4, data_align=4)
at /home/vpp/build-data/../src/vppinfra/vec.c:59
#2 0x769313c7 in _vec_resize (v=0x0, length_increment=50, 
data_bytes=50, header_bytes=0, data_align=0)
at /home/vpp/build-data/../src/vppinfra/vec.h:142
#3 0x769322bb in do_percent (_s=0x7fffb6cee348, fmt=0x76995c90 
"%s:%d (%s) assertion `%s' fails", va=0x7fffb6cee3e0)
at /home/vpp/build-data/../src/vppinfra/format.c:339
#4 0x76932703 in va_format (s=0x0, fmt=0x76995c90 "%s:%d (%s) 
assertion `%s' fails", va=0x7fffb6cee3e0)
at /home/vpp/build-data/../src/vppinfra/format.c:402
#5 0x7692ce4e in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x76995c90 "%s:%d (%s) assertion `%s' fails")
at /home/vpp/build-data/../src/vppinfra/error.c:127
#6 0x769496a3 in mheap_get_search_free_bin (v=0x3000a000, bin=12, 
n_user_data_bytes_arg=0x7fffb6cee6b0, align=4, align_offset=0)
at /home/vpp/build-data/../src/vppinfra/mheap.c:401
#7 0x76949e86 in mheap_get_search_free_list (v=0x3000a000, 
n_user_bytes_arg=0x7fffb6cee6b0, align=4, align_offset=0)
at /home/vpp/build-data/../src/vppinfra/mheap.c:569
#8 0x7694a326 in mheap_get_aligned (v=0x3000a000, n_user_data_bytes=56, 
align=4, align_offset=0, offset_return=0x7fffb6cee758)
at /home/vpp/build-data/../src/vppinfra/mheap.c:700
#9 0x7697f91e in clib_mem_alloc_aligned_at_offset (size=54, align=4, 
align_offset=4, os_out_of_memory_on_failure=1)
at /home/vpp/build-data/../src/vppinfra/mem.h:92
#10 0x7697fcde in vec_resize_allocate_memory (v=0x0, 
length_increment=50, data_bytes=54, header_bytes=4, data_align=4)
at /home/vpp/build-data/../src/vppinfra/vec.c:59
#11 0x769313c7 in _vec_resize (v=0x0, length_increment=50, 
data_bytes=50, header_bytes=0, data_align=0)
at /home/vpp/build-data/../src/vppinfra/vec.h:142
#12 0x769322bb in do_percent (_s=0x7fffb6ceea78, fmt=0x76995c90 
"%s:%d (%s) assertion `%s' fails", va=0x7fffb6ceeb10)
at /home/vpp/build-data/../src/vppinfra/format.c:339
#13 0x76932703 in va_format (s=0x0, fmt=0x76995c90 "%s:%d (%s) 
assertion `%s' fails", va=0x7fffb6ceeb10)
at /home/vpp/build-data/../src/vppinfra/format.c:402
#14 0x7692ce4e in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x76995c90 "%s:%d (%s) assertion `%s' fails")
at /home/vpp/build-data/../src/vppinfra/error.c:127
#15 0x769496a3 in mheap_get_search_free_bin (v=0x3000a000, bin=12, 
n_user_data_bytes_arg=0x7fffb6ceede0, align=4, align_offset=0)
at /home/vpp/build-data/../src/vppinfra/mheap.c:401
#16 0x76949e86 in mheap_get_search_free_list (v=0x3000a000, 
n_user_bytes_arg=0x7fffb6ceede0, align=4, align_offset=0)
at /home/vpp/build-data/../src/vppinfra/mheap.c:569
#17 0x7694a326 in mheap_get_aligned (v=0x3000a000, 
n_user_data_bytes=56, align=4, align_offset=0, offset_return=0x7fffb6ceee88)
at /home/vpp/build-data/../src/vppinfra/mheap.c:700
#18 0x7697f91e in clib_mem_alloc_aligned_at_offset (size=54, align=4, 
align_offset=4, os_out_of_memory_on_failure=1)
at /home/vpp/build-data/../src/vppinfra/mem.h:92
#19 0x7697fcde in vec_resize_allocate_memory (v=0x0, 
length_increment=50, data_bytes=54, header_bytes=4, data_align=4)
---Type  to continue, or q  to quit---
at /home/vpp/build-data/../src/vppinfra/vec.c:59
#20 0x769313c7 in _vec_resize (v=0x0, length_increment=50, 
data_bytes=50, header_bytes=0, data_align=0)
at /home/vpp/build-data/../src/vppinfra/vec.h:142
#21 0x769322bb in do_percent (_s=0x7fffb6cef1a8, fmt=0x76995c90 
"%s:%d (%s) assertion `%s' fails", va=0x7fffb6cef240)
at /home/vpp/build-data/../src/vppinfra/format.c:339
#22 0x76932703 in va_format (s=0x0, fmt=0x76995c90 "%s:%d (%s) 
assertion `%s' fails", 

[vpp-dev] #vpp vpls configuration

2018-04-23 Thread omid via Lists.Fd.Io
Hi, I configure VPLS with the following commands:

VPLS configuration: MPLS L2VPN VPLS - PE1
1. create host-interface name eth0
2. create host-interface name eth1
3. set int state host-eth1 up
4. set int state host-eth0 up
5. set interface mac address host-eth0 00:03:7F:FF:FF:FF
6. set interface mac address host-eth1 00:03:7F:FF:FF:FE
7. set int ip address host-eth1 2.1.1.1/24
8. set interface mpls host-eth1 enable
9. mpls tunnel l2-only via 2.1.1.2 host-eth1 out-label 34 out-label 33
10. set int state mpls-tunnel0 up
11. set interface l2 bridge mpls-tunnel0 1
12. set interface l2 bridge host-eth0 1
13. mpls local-label add eos 1023 via 2.1.1.2 l2-input-on mpls-tunnel0
14. mpls local-label add non-eos 1024 mpls-lookup-in-table 0

But when I configure VPP with command 13 (mpls local-label add eos 1023 via 
2.1.1.2 l2-input-on mpls-tunnel0),
VPP crashes and exits the command line. Does anyone know what causes it?


Re: [vpp-dev] mheap performance issue and fixup

2018-04-23 Thread xyxue
Hi Kingwel,

After I merged the three patches, there is a SIGSEGV when I start up VPP (not 
every time). The error didn't appear before.
Is there anything I can do to fix it?

Program received signal SIGSEGV, Segmentation fault. 
clib_mem_alloc_aligned_at_offset (size=54, align=4, align_offset=4, 
os_out_of_memory_on_failure=1) at /home/vpp/build-data/../src/vppinfra/mem.h:90 
90 cpu = os_get_thread_index (); 
(gdb) bt 
#0 clib_mem_alloc_aligned_at_offset (size=54, align=4, align_offset=4, 
os_out_of_memory_on_failure=1) at /home/vpp/build-data/../src/vppinfra/mem.h:90 
#1 0x7697fcde in vec_resize_allocate_memory (v=0x0, 
length_increment=50, data_bytes=54, header_bytes=4, data_align=4) 
at /home/vpp/build-data/../src/vppinfra/vec.c:59 
#2 0x769313c7 in _vec_resize (v=0x0, length_increment=50, 
data_bytes=50, header_bytes=0, data_align=0) 
at /home/vpp/build-data/../src/vppinfra/vec.h:142 
#3 0x769322bb in do_percent (_s=0x7fffb6cee348, fmt=0x76995c90 
"%s:%d (%s) assertion `%s' fails", va=0x7fffb6cee3e0) 
at /home/vpp/build-data/../src/vppinfra/format.c:339 
#4 0x76932703 in va_format (s=0x0, fmt=0x76995c90 "%s:%d (%s) 
assertion `%s' fails", va=0x7fffb6cee3e0) 
at /home/vpp/build-data/../src/vppinfra/format.c:402 
#5 0x7692ce4e in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x76995c90 "%s:%d (%s) assertion `%s' fails") 
at /home/vpp/build-data/../src/vppinfra/error.c:127 
#6 0x769496a3 in mheap_get_search_free_bin (v=0x3000a000, bin=12, 
n_user_data_bytes_arg=0x7fffb6cee6b0, align=4, align_offset=0) 
at /home/vpp/build-data/../src/vppinfra/mheap.c:401 
#7 0x76949e86 in mheap_get_search_free_list (v=0x3000a000, 
n_user_bytes_arg=0x7fffb6cee6b0, align=4, align_offset=0) 
at /home/vpp/build-data/../src/vppinfra/mheap.c:569 
#8 0x7694a326 in mheap_get_aligned (v=0x3000a000, n_user_data_bytes=56, 
align=4, align_offset=0, offset_return=0x7fffb6cee758) 
at /home/vpp/build-data/../src/vppinfra/mheap.c:700 
#9 0x7697f91e in clib_mem_alloc_aligned_at_offset (size=54, align=4, 
align_offset=4, os_out_of_memory_on_failure=1) 
at /home/vpp/build-data/../src/vppinfra/mem.h:92 
#10 0x7697fcde in vec_resize_allocate_memory (v=0x0, 
length_increment=50, data_bytes=54, header_bytes=4, data_align=4) 
at /home/vpp/build-data/../src/vppinfra/vec.c:59 
#11 0x769313c7 in _vec_resize (v=0x0, length_increment=50, 
data_bytes=50, header_bytes=0, data_align=0) 
at /home/vpp/build-data/../src/vppinfra/vec.h:142 
#12 0x769322bb in do_percent (_s=0x7fffb6ceea78, fmt=0x76995c90 
"%s:%d (%s) assertion `%s' fails", va=0x7fffb6ceeb10) 
at /home/vpp/build-data/../src/vppinfra/format.c:339 
#13 0x76932703 in va_format (s=0x0, fmt=0x76995c90 "%s:%d (%s) 
assertion `%s' fails", va=0x7fffb6ceeb10) 
at /home/vpp/build-data/../src/vppinfra/format.c:402 
#14 0x7692ce4e in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x76995c90 "%s:%d (%s) assertion `%s' fails") 
at /home/vpp/build-data/../src/vppinfra/error.c:127 
#15 0x769496a3 in mheap_get_search_free_bin (v=0x3000a000, bin=12, 
n_user_data_bytes_arg=0x7fffb6ceede0, align=4, align_offset=0) 
at /home/vpp/build-data/../src/vppinfra/mheap.c:401 
#16 0x76949e86 in mheap_get_search_free_list (v=0x3000a000, 
n_user_bytes_arg=0x7fffb6ceede0, align=4, align_offset=0) 
at /home/vpp/build-data/../src/vppinfra/mheap.c:569 
#17 0x7694a326 in mheap_get_aligned (v=0x3000a000, 
n_user_data_bytes=56, align=4, align_offset=0, offset_return=0x7fffb6ceee88)
at /home/vpp/build-data/../src/vppinfra/mheap.c:700 
#18 0x7697f91e in clib_mem_alloc_aligned_at_offset (size=54, align=4, 
align_offset=4, os_out_of_memory_on_failure=1) 
at /home/vpp/build-data/../src/vppinfra/mem.h:92 
#19 0x7697fcde in vec_resize_allocate_memory (v=0x0, 
length_increment=50, data_bytes=54, header_bytes=4, data_align=4) 
---Type  to continue, or q  to quit--- 
at /home/vpp/build-data/../src/vppinfra/vec.c:59 
#20 0x769313c7 in _vec_resize (v=0x0, length_increment=50, 
data_bytes=50, header_bytes=0, data_align=0) 
at /home/vpp/build-data/../src/vppinfra/vec.h:142 
#21 0x769322bb in do_percent (_s=0x7fffb6cef1a8, fmt=0x76995c90 
"%s:%d (%s) assertion `%s' fails", va=0x7fffb6cef240) 
at /home/vpp/build-data/../src/vppinfra/format.c:339 
#22 0x76932703 in va_format (s=0x0, fmt=0x76995c90 "%s:%d (%s) 
assertion `%s' fails", va=0x7fffb6cef240) 
at /home/vpp/build-data/../src/vppinfra/format.c:402 
#23 0x7692ce4e in _clib_error (how_to_die=2, function_name=0x0, 
line_number=0, fmt=0x76995c90 "%s:%d (%s) assertion `%s' fails") 
at /home/vpp/build-data/../src/vppinfra/error.c:127 
#24 0x769496a3 in mheap_get_search_free_bin (v=0x3000a000, bin=12, 
n_user_data_bytes_arg=0x7fffb6cef510, align=4, align_offset=0) 
at /home/vpp/build-data/../src/vppinfra/mheap.c:401 
#25 

Re: [vpp-dev] Dynamically attach to a PCI device

2018-04-23 Thread Avi Cohen (A)
I'll clarify my question.
For a PCI device, e.g. :04:00:0 - is it possible to add this device to VPP at 
runtime with a specific CLI command?
Currently I can only add this kind of device by specifying it in the 
white-list in startup.conf.
Best Regards
Avi

> -Original Message-
> From: Avi Cohen (A)
> Sent: Monday, 16 April, 2018 10:48 AM
> To: vpp-dev@lists.fd.io
> Subject: Dynamically attach to a PCI device
> 
> Hello All
> 
> Currently I attach the physical NIC PCI devices by setting the white list in 
> the startup.conf. How can I attach them dynamically at runtime?
> 
> Best Regards
> Avi

