[vpp-dev] 464XLAT and MAP-T

2019-04-08 Thread david . leitch . vpp
[Edited Message Follows]

Hi,

I want to test the 464XLAT feature of VPP 19.04. I tried to build and configure the
NAT and MAP plugins using this guide:
https://wiki.fd.io/view/VPP/NAT#464XLAT

But some CLI commands have changed, so I am using this configuration:

client (Linux):

sudo ifconfig enp0s8 up
sudo ifconfig enp0s8 192.168.5.1/24
sudo ip route add 192.168.50.0/24 via 192.168.5.2

server (Linux):

sudo ifconfig enp0s10 up
sudo ifconfig enp0s10 192.168.50.1/24
sudo ip route add 192.168.40.0/24 via 192.168.50.2

CE (VPP):

set int state GigabitEthernet0/8/0 up
set int state GigabitEthernet0/9/0 up
set int ip address GigabitEthernet0/8/0 192.168.5.2/24
set int ip address GigabitEthernet0/9/0 9::1/64
ip route add ::/0 via 9::2

map interface GigabitEthernet0/8/0 map-t
map interface GigabitEthernet0/9/0 map-t
map add domain ip4-pfx 0.0.0.0/0 ip6-pfx 1:2:3::/96 ip6-src 2001:db8::/96 ea-bits-len 0 psid-offset 0 psid-len 0 mtu 9206

PE (VPP):

set int state GigabitEthernet0/9/0 up
set int state GigabitEthernet0/a/0 up
set int ip address GigabitEthernet0/9/0 9::2/64
set int ip address GigabitEthernet0/a/0 192.168.50.2/24
ip route add ::/0 via 9::1
set int nat64 in GigabitEthernet0/9/0
set int nat64 out GigabitEthernet0/a/0
nat64 add prefix 1:2:3::/96
nat64 add pool address 192.168.40.1 - 192.168.40.254
I have some problems on VPP 19.04, but VPP 18.10 works.

*First*: coredump on the "show map domain" command

vpp# show map domain index 1
show map domain: MAP domain does not exists 1
vpp# show map domain index 0
 
Thread 1 "vpp_main" received signal SIGSEGV, Segmentation fault.
0x7fffb1b271ac in format_map_domain (s=0x0, args=) at 
/root/vpp/src/plugins/map/map.c:940
940  s = format (s,
(gdb) bt
#0  0x7fffb1b271ac in format_map_domain (s=0x0, args=) at 
/root/vpp/src/plugins/map/map.c:940
#1  0x767abea9 in do_percent (va=0x7fffb6b0cab8, fmt=0x7fffb1b2f5fd 
"%U", _s=) at /root/vpp/src/vppinfra/format.c:373
#2  va_format (s=s@entry=0x0, fmt=, va=va@entry=0x7fffb6b0cab8) 
at /root/vpp/src/vppinfra/format.c:404
#3  0x76c7aaf9 in vlib_cli_output (vm=vm@entry=0x76ef66c0 
, fmt=fmt@entry=0x7fffb1b2f5fd "%U") at 
/root/vpp/src/vlib/cli.c:732
#4  0x7fffb1b261f8 in show_map_domain_command_fn (vm=0x76ef66c0 
, input=, cmd=) at 
/root/vpp/src/plugins/map/map.c:1065
#5  0x76c7adee in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x76ef66c0 , cm=cm@entry=0x76ef68c0 
, 
    input=input@entry=0x7fffb6b0cf60, parent_command_index=) at 
/root/vpp/src/vlib/cli.c:607
#6  0x76c7b274 in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x76ef66c0 , cm=cm@entry=0x76ef68c0 
, 
    input=input@entry=0x7fffb6b0cf60, parent_command_index=) at 
/root/vpp/src/vlib/cli.c:568
#7  0x76c7b274 in vlib_cli_dispatch_sub_commands 
(vm=vm@entry=0x76ef66c0 , cm=cm@entry=0x76ef68c0 
, 
    input=input@entry=0x7fffb6b0cf60, 
parent_command_index=parent_command_index@entry=0) at 
/root/vpp/src/vlib/cli.c:568
#8  0x76c7b790 in vlib_cli_input (vm=0x76ef66c0 , 
input=input@entry=0x7fffb6b0cf60, 
    function=function@entry=0x76cd60d0 , 
function_arg=function_arg@entry=0) at /root/vpp/src/vlib/cli.c:707
#9  0x76cd8035 in unix_cli_process_input (cm=0x76ef7000 
, cli_file_index=0) at /root/vpp/src/vlib/unix/cli.c:2420
#10 unix_cli_process (vm=0x76ef66c0 , rt=0x7fffb6afc000, 
f=) at /root/vpp/src/vlib/unix/cli.c:2536
#11 0x76c93c06 in vlib_process_bootstrap (_a=) at 
/root/vpp/src/vlib/main.c:1469
#12 0x767b517c in clib_calljmp () from 
/usr/lib/x86_64-linux-gnu/libvppinfra.so.19.04
#13 0x7fffb5dffae0 in ?? ()
#14 0x76c998a1 in vlib_process_startup (f=0x0, p=0x7fffb6afc000, 
vm=0x76ef66c0 ) at /root/vpp/src/vlib/main.c:1491
#15 dispatch_process (vm=0x76ef66c0 , p=0x7fffb6afc000, 
last_time_stamp=0, f=0x0) at /root/vpp/src/vlib/main.c:1536
#16 0x0482 in ?? ()
#17 0x0482 in ?? ()

*Second*: the MAP-T plugin cannot translate IPv6 to IPv4.
In the /vpp/src/plugins/map/ip6_map_t.c file, at line 537 in the ip6_map_t
function:

          p0 = vlib_get_buffer (vm, pi0);
          ip60 = vlib_buffer_get_current (p0);

          d0 =
            ip6_map_get_domain (&ip60->dst_address,
                                &vnet_buffer (p0)->map_t.map_domain_index,
                                &error0);
          if (!d0)
            {                   /* Guess it wasn't for us */
              vnet_feature_next (&next0, p0);
              goto exit;
            }

The ip6_map_get_domain function cannot find the MAP domain and returns 0, so the next
node will be ip6-lookup and the ICMPv6 packet is routed back to the PE again!

Is this a bug, or did I use a bad configuration?


Re: [vpp-dev] L2 xconnect feature on multiple interface

2019-01-19 Thread david . leitch . vpp
If I change the mode of TenGE4/0/0 and TenGE4/0/1 to L3 mode, then the tx/rx
counters of TenGE7/0/0 and TenGE7/0/1 match,
and the rx counter of TenGE7/0/0 is not zero.
I have checked connectivity and am sure it is correct, because if I only use
TenGE7/0/0 and TenGE7/0/1 in L2XC mode,
everything is fine and the rx/tx counters match.

vpp# set int state TenGigabitEthernet7/0/0 up 
vpp# set int state TenGigabitEthernet7/0/1 up

vpp# set interface promiscuous on TenGigabitEthernet7/0/0
vpp# set interface promiscuous on TenGigabitEthernet7/0/1

vpp# set interface l2 xconnect TenGigabitEthernet7/0/1 TenGigabitEthernet7/0/0
vpp# set interface l2 xconnect TenGigabitEthernet7/0/0 TenGigabitEthernet7/0/1
 

vpp# show interface 
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet21/0/0          5      up          9000/0/0/0     rx packets                     1
                                                                    rx bytes                     135
                                                                    drops                          1
TenGigabitEthernet21/0/1          6      up          9000/0/0/0     rx packets                     1
                                                                    rx bytes                     135
                                                                    tx packets                  2488
                                                                    tx bytes                  104496
                                                                    drops                          1
TenGigabitEthernet24/0/0          7      up          9000/0/0/0     rx packets                     1
                                                                    rx bytes                     135
                                                                    drops                          1
TenGigabitEthernet24/0/1          8      up          9000/0/0/0     rx packets                     4
                                                                    rx bytes                     604
                                                                    tx packets                254663
                                                                    tx bytes                32960145
                                                                    drops                          1
                                                                    punt                           3
                                                                    ip4                            3
TenGigabitEthernet4/0/0           1      up          9000/0/0/0     rx packets                257326
                                                                    rx bytes                15439635
                                                                    drops                     254651
                                                                    ip4                       257325
TenGigabitEthernet4/0/1           2      up          9000/0/0/0     rx packets                     1
                                                                    rx bytes                     135
                                                                    drops                          1
TenGigabitEthernet7/0/0           3      up          9000/0/0/0     rx packets               2282578
                                                                    rx bytes               570469641
                                                                    tx packets               3007373
                                                                    tx bytes              3087583217
                                                                    tx-error                      28
TenGigabitEthernet7/0/1           4      up          9000/0/0/0     rx packets               3007373
                                                                    rx bytes              3087583217
                                                                    tx packets               2282578
                                                                    tx bytes               570469641
local0                            0     down          0/0/0/0       drops                     254667

vpp# show mode 
l3 local0  
l3 TenGigabitEthernet4/0/0  
l3 TenGigabitEthernet4/0/1  
l2 xconnect TenGigabitEthernet7/0/0 TenGigabitEthernet7/0/1
l2 xconnect TenGigabitEthernet7/0/1 TenGigabitEthernet7/0/0
l3 TenGigabitEthernet21/0/0  
l3 TenGigabitEthernet21/0/1  
l3 TenGigabitEthernet24/0/0  
l3 TenGigabitEthernet24/0/1 

But it is not possible to use multiple interfaces in L2XC mode at the same time.
[vpp-dev] L2 xconnect feature on multiple interface

2019-01-19 Thread david . leitch . vpp
Hi,
 
I want to use the L2 xconnect or L2 bridge feature on multiple interfaces, i.e.
suppose I have 4 ports:
x1 and x2, and also y1 and y2.
I want to only do

vpp# set interface l2 xconnect x1 x2
vpp# set interface l2 xconnect x2 x1
vpp# set interface l2 xconnect y1 y2
vpp# set interface l2 xconnect y2 y1

so that all traffic from port x1 is sent to port x2 and vice versa,
and all traffic from port y1 is sent to port y2 and vice versa.

But only one pair works at a time; for example, only x1 and x2
work while y1 and y2 do not.
If I change the x1 and x2 interfaces to L3 mode, then the y1 and y2 interfaces
will work in L2 mode.

I tested this situation with an L2 bridge and have the same problem.
Why doesn't it work when I set L2 (xconnect or bridge) on multiple
interfaces?

Here is my configuration and commands:

*for the l2 xconnect test:*
vpp# set int state TenGigabitEthernet4/0/0 up
vpp# set int state TenGigabitEthernet4/0/1 up
vpp# set int state TenGigabitEthernet7/0/0 up
vpp# set int state TenGigabitEthernet7/0/1 up

vpp# set interface promiscuous on TenGigabitEthernet4/0/0
vpp# set interface promiscuous on TenGigabitEthernet4/0/1
vpp# set interface promiscuous on TenGigabitEthernet7/0/0
vpp# set interface promiscuous on TenGigabitEthernet7/0/1

vpp# set interface l2 xconnect TenGigabitEthernet4/0/0 TenGigabitEthernet4/0/1
vpp# set interface l2 xconnect TenGigabitEthernet4/0/1 TenGigabitEthernet4/0/0
vpp# set interface l2 xconnect TenGigabitEthernet7/0/1 TenGigabitEthernet7/0/0
vpp# set interface l2 xconnect TenGigabitEthernet7/0/0 TenGigabitEthernet7/0/1
 
vpp# show interface 
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet21/0/0          5     down         9000/0/0/0
TenGigabitEthernet21/0/1          6     down         9000/0/0/0
TenGigabitEthernet24/0/0          7     down         9000/0/0/0
TenGigabitEthernet24/0/1          8     down         9000/0/0/0
TenGigabitEthernet4/0/0           1      up          9000/0/0/0     rx packets               2803151
                                                                    rx bytes               700562045
                                                                    tx packets               3693049
                                                                    tx bytes              3791177782
                                                                    tx-error                      62
TenGigabitEthernet4/0/1           2      up          9000/0/0/0     rx packets               3693049
                                                                    rx bytes              3791177782
                                                                    tx packets               2803151
                                                                    tx bytes               700562045
TenGigabitEthernet7/0/0           3      up          9000/0/0/0     tx packets               4799610
                                                                    tx bytes              3764207865
TenGigabitEthernet7/0/1           4      up          9000/0/0/0     rx packets               4799610
                                                                    rx bytes              3764207865
local0                            0     down          0/0/0/0
 

vpp# show mode 
l3 local0  
l2 xconnect TenGigabitEthernet4/0/0 TenGigabitEthernet4/0/1
l2 xconnect TenGigabitEthernet4/0/1 TenGigabitEthernet4/0/0
l2 xconnect TenGigabitEthernet7/0/0 TenGigabitEthernet7/0/1
l2 xconnect TenGigabitEthernet7/0/1 TenGigabitEthernet7/0/0
l3 TenGigabitEthernet21/0/0  
l3 TenGigabitEthernet21/0/1  
l3 TenGigabitEthernet24/0/0  
l3 TenGigabitEthernet24/0/1 

I have checked connectivity and am sure it is correct.

Regards,
david


Re: [vpp-dev] NAT handoff mechanism

2018-12-30 Thread david . leitch . vpp
*Hi Damjan,*
*Thanks for your answer.*

*I think it is possible (I encountered this problem) to have a deadlock even when
I use separate CPUs for handoff and NAT processing, or the same CPU with the
congestion drop mechanism.*

*I studied the code and separated the CPUs for handoff and NAT processing; in
this situation I think there is just one case in which a deadlock on handoff may happen:*
*suppose worker A wants to dequeue, but elt->valid is zero, so
vlib_frame_queue_dequeue returns without dequeuing anything; as a result, fq->head
will never advance:*
 
int vlib_frame_queue_dequeue (vlib_main_t *vm, vlib_frame_queue_main_t *fqm)
{
  /* ... */
  while (1)
    {
      if (fq->head == fq->tail)
        return processed;

      elt = fq->elts + ((fq->head + 1) & (fq->nelts - 1));

      if (!elt->valid)                /* <-- returns without dequeuing */
        {
          fq->head_hint = fq->head;
          return processed;
        }
      /* ... */
    }
}

*And on the other side, worker B (in the handoff node) wants to enqueue, but the
ring is full, so it waits:*

static inline vlib_frame_queue_elt_t *
vlib_get_frame_queue_elt (u32 frame_queue_index, u32 index)
{
  ...
  new_tail = __sync_add_and_fetch (&fq->tail, 1);

  /* Wait until a ring slot is available */
  while (new_tail >= fq->head_hint + fq->nelts)
    vlib_worker_thread_barrier_check ();
  ...
}

*Therefore worker B never exits the while loop.* *Do you think this is possible?*


[vpp-dev] NAT handoff mechanism

2018-12-28 Thread david . leitch . vpp
Hi ...
 
I know that we need the handoff mechanism when running multithreaded, because traffic
for a specific inside-network user must always be processed on the same thread in
both directions, and we cannot remove the handoff node from the NAT nodes because
handoff is faster than a locking mechanism.
 
So the problem is a potential deadlock when two workers wait for each other:
for example workers A and B, where A is going to hand off to B but unfortunately at
the same time B has the same thing to do towards A; then they both wait
forever. Your solution in VPP 19.01 (and 18.10) is dropping packets when the
queue is full (congestion drop).

The first question is: how do you detect congestion on the queue?

And what happens if we have separate CPUs for the handoff node and the NAT processing
node, instead of doing both handoff and NAT on a single CPU core?
For example, CPU core A runs (dpdk-input -> ... -> NAT handoff) and CPU
core B runs (nat44-in2out -> ... -> ip4-lookup -> interface-output); in
this situation worker A waits for worker B, but worker B never waits for worker A.
Is it true to say we can never have a potential deadlock if we have separate CPUs?
If yes, why do you use the same single CPU for both NAT and handoff? Can separate
CPUs not solve the deadlock? (A small sketch of the congestion-drop idea follows below.)

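To make the congestion-drop question concrete, here is a minimal sketch (not the
actual VPP frame-queue code; the names handoff_ring_t and handoff_try_enqueue are
made up): a producer that checks ring occupancy from the head/tail indices and
drops when the ring is full, instead of spinning, can never be blocked by its
consumer, so two workers handing off to each other cannot deadlock.

#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 64                       /* power of two, like fq->nelts */

typedef struct
{
  volatile uint64_t head;                  /* consumer (dequeue) index */
  volatile uint64_t tail;                  /* producer (enqueue) index */
  uint32_t slots[RING_SIZE];               /* buffer indices being handed off */
} handoff_ring_t;

/* Returns true on success, false when the ring is full (a congestion drop). */
static bool
handoff_try_enqueue (handoff_ring_t *r, uint32_t buffer_index)
{
  if (r->tail - r->head >= RING_SIZE)      /* occupancy check: ring is full */
    return false;                          /* caller frees the buffer and bumps the drop counter */
  r->slots[r->tail % RING_SIZE] = buffer_index;
  r->tail++;                               /* real code publishes the slot with atomics/barriers */
  return true;
}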


[vpp-dev] Congestion Drop In Handoff

2018-12-23 Thread david . leitch . vpp
Hi,
I am testing VPP NAT plugin performance in NAT44 dynamic-translation mode with 40
CPU cores.

At the beginning, the show errors output looks like this:

vpp# 
vpp# show errors 
   Count                    Node                  Reason
        15       nat44-out2in-worker-handoff      congestion drop
         5       nat44-in2out-worker-handoff      congestion drop
       414              nat44-out2in              Good out2in packets processed
    537254              nat44-out2in              No translation
     11649          nat44-in2out-slowpath         Good in2out packets processed
    256125              nat44-in2out              Good in2out packets processed
        14                llc-input               unknown llc ssap/dsap
        14       nat44-out2in-worker-handoff      congestion drop
         2       nat44-in2out-worker-handoff      congestion drop
       212              nat44-out2in              Good out2in packets processed
    534239              nat44-out2in              No translation
     11668          nat44-in2out-slowpath         Good in2out packets processed
    257012              nat44-in2out              Good in2out packets processed
        20       nat44-out2in-worker-handoff      congestion drop
         1       nat44-in2out-worker-handoff      congestion drop
       130              nat44-out2in              Good out2in packets processed
    411670              nat44-out2in              No translation
     11692          nat44-in2out-slowpath         Good in2out packets processed
    257286              nat44-in2out              Good in2out packets processed
        10       nat44-out2in-worker-handoff      congestion drop
       144              nat44-out2in              Good out2in packets processed
    320990              nat44-out2in              No translation
     11670          nat44-in2out-slowpath         Good in2out packets processed
    256625              nat44-in2out              Good in2out packets processed
         9       nat44-out2in-worker-handoff      congestion drop
         3       nat44-in2out-worker-handoff      congestion drop
       123              nat44-out2in              Good out2in packets processed
    323996              nat44-out2in              No translation
[ . . . ]

But after a few minutes the show errors command shows nothing, and I just have
rx-miss on all interfaces:
vpp# show interface 
              Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
TenGigabitEthernet21/0/0          5      up          9000/0/0/0     rx-miss                228482011
TenGigabitEthernet21/0/1          6      up          9000/0/0/0     rx-miss                 74391436
TenGigabitEthernet24/0/0          7      up          9000/0/0/0     rx-miss                231428640
TenGigabitEthernet24/0/1          8      up          9000/0/0/0     rx-miss                283173016
TenGigabitEthernet4/0/0           1      up          9000/0/0/0     rx-miss                 12672280
TenGigabitEthernet4/0/1           2      up          9000/0/0/0     rx-miss                  6230655
TenGigabitEthernet7/0/0           3      up          9000/0/0/0     rx-miss                 10559818
TenGigabitEthernet7/0/1           4      up          9000/0/0/0     rx-miss                  6825264
local0                            0     down          0/0/0/0

I had this issue on VPP 18.04 and thought it was a problem when the handoff queue is
congested (multiple workers), and that it was fixed in 18.10, but nothing has
changed.
Is it still a bug in handoff?

In the output of perf top, only nat44_worker_handoff_fn_inline shows up.


Re: [vpp-dev] Worker Thread Dead Lock on NAT44 IPFIX

2018-12-22 Thread david . leitch . vpp
Hi,
What is the difference between "plugins/flowprobe" and "vnet/flow"?


Re: [vpp-dev] Worker Thread Dead Lock on NAT44 IPFIX

2018-12-19 Thread david . leitch . vpp
Hi Matus,

Thanks for your answer. Can you explain more about this issue?
Do you mean that I should rewrite the "nat_ipfix_logging.c" file to work per thread,
or "vnet/ipfix-export/flow_report"?


Re: [vpp-dev] Worker Thread Dead Lock on NAT44 IPFIX

2018-12-18 Thread david . leitch . vpp
*Hi,*

*Every time I enable IPFIX for NAT, the main thread (vpp_main) runs so
slowly that it is very hard to enter any command, and the drop rate increases
until all traffic is dropped.*
*But I got a worker thread deadlock once, after some hours.*
*Is it normal behavior that the main thread (vpp_main) slows down like this
every time I enable IPFIX for NAT?*


Re: [vpp-dev] Config NAT plugin for with dynamic translations

2018-12-18 Thread david . leitch . vpp
Does the VPP 18.04 NAT plugin have a bug, or does it simply not work?


Re: [vpp-dev] Config NAT plugin for with dynamic translations

2018-12-18 Thread david . leitch . vpp
I used VPP 18.04.


Re: [vpp-dev] Config NAT plugin for with dynamic translations

2018-12-18 Thread david . leitch . vpp
vpp# show interface rx-placement 
Thread 1 (vpp_wk_0):
  node dpdk-input:
    TenGigabitEthernet4/0/0 queue 0 (polling)
    TenGigabitEthernet4/0/1 queue 0 (polling)
    TenGigabitEthernet7/0/0 queue 0 (polling)
    TenGigabitEthernet7/0/1 queue 0 (polling)
    TenGigabitEthernet21/0/0 queue 0 (polling)
    TenGigabitEthernet21/0/1 queue 0 (polling)
    TenGigabitEthernet24/0/0 queue 0 (polling)
    TenGigabitEthernet24/0/1 queue 0 (polling)
Thread 2 (vpp_wk_1):
  node dpdk-input:
    TenGigabitEthernet4/0/0 queue 1 (polling)
    TenGigabitEthernet4/0/1 queue 1 (polling)
    TenGigabitEthernet7/0/0 queue 1 (polling)
    TenGigabitEthernet7/0/1 queue 1 (polling)
    TenGigabitEthernet21/0/0 queue 1 (polling)
    TenGigabitEthernet21/0/1 queue 1 (polling)
    TenGigabitEthernet24/0/0 queue 1 (polling)
    TenGigabitEthernet24/0/1 queue 1 (polling)
Thread 3 (vpp_wk_2):
  node dpdk-input:
    TenGigabitEthernet4/0/0 queue 2 (polling)
    TenGigabitEthernet4/0/1 queue 2 (polling)
    TenGigabitEthernet7/0/0 queue 2 (polling)
    TenGigabitEthernet7/0/1 queue 2 (polling)
    TenGigabitEthernet21/0/0 queue 2 (polling)
    TenGigabitEthernet21/0/1 queue 2 (polling)
    TenGigabitEthernet24/0/0 queue 2 (polling)
    TenGigabitEthernet24/0/1 queue 2 (polling)
[...]
Thread 38 (vpp_wk_37):
  node dpdk-input:
    TenGigabitEthernet4/0/0 queue 37 (polling)
    TenGigabitEthernet4/0/1 queue 37 (polling)
    TenGigabitEthernet7/0/0 queue 37 (polling)
    TenGigabitEthernet7/0/1 queue 37 (polling)
    TenGigabitEthernet21/0/0 queue 37 (polling)
    TenGigabitEthernet21/0/1 queue 37 (polling)
    TenGigabitEthernet24/0/0 queue 37 (polling)
    TenGigabitEthernet24/0/1 queue 37 (polling)
Thread 39 (vpp_wk_38):
  node dpdk-input:
    TenGigabitEthernet4/0/0 queue 38 (polling)
    TenGigabitEthernet4/0/1 queue 38 (polling)
    TenGigabitEthernet7/0/0 queue 38 (polling)
    TenGigabitEthernet7/0/1 queue 38 (polling)
    TenGigabitEthernet21/0/0 queue 38 (polling)
    TenGigabitEthernet21/0/1 queue 38 (polling)
    TenGigabitEthernet24/0/0 queue 38 (polling)
    TenGigabitEthernet24/0/1 queue 38 (polling)
Thread 40 (vpp_wk_39):
  node dpdk-input:
    TenGigabitEthernet4/0/0 queue 39 (polling)
    TenGigabitEthernet4/0/1 queue 39 (polling)
    TenGigabitEthernet7/0/0 queue 39 (polling)
    TenGigabitEthernet7/0/1 queue 39 (polling)
    TenGigabitEthernet21/0/0 queue 39 (polling)
    TenGigabitEthernet21/0/1 queue 39 (polling)
    TenGigabitEthernet24/0/0 queue 39 (polling)
    TenGigabitEthernet24/0/1 queue 39 (polling)


Re: [vpp-dev] Config NAT plugin for with dynamic translations

2018-12-18 Thread david . leitch . vpp
vpp# show trace
[...]
--- Start of thread 16 vpp_wk_15 ---
Packet 1
 
00:00:02:897486: dpdk-input
  TenGigabitEthernet4/0/0 rx queue 15
  buffer 0x25507a1: current data 14, length 46, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x0
                    ext-hdr-valid 
                    l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14 
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2176, data_len 60, ol_flags 0x182, data_off 128, phys_addr 
0xaf01e8c0
    packet_type 0x111 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    Packet Offload Flags
      PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
      PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
      PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
    Packet Types
      RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
      RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
      RTE_PTYPE_L4_TCP (0x0100) TCP packet
  IP4: 38:ea:a7:16:d0:f4 -> 00:1b:21:bc:10:42
  TCP: 16.4.13.55 -> 46.4.13.55
    tos 0x00, ttl 128, length 40, checksum 0x6961
    fragment id 0x78f9
  TCP: 12021 -> 80
    seq. 0x17f0b4e8 ack 0x17f15fe3
    flags 0x10 ACK, tcp header: 20 bytes
    window 32768, checksum 0x636c
00:00:05:215972: ip4-input-no-checksum
  TCP: 16.4.13.55 -> 46.4.13.55
    tos 0x00, ttl 128, length 40, checksum 0x6961
    fragment id 0x78f9
  TCP: 12021 -> 80
    seq. 0x17f0b4e8 ack 0x17f15fe3
    flags 0x10 ACK, tcp header: 20 bytes
    window 32768, checksum 0x636c
00:00:05:216816: nat44-in2out-worker-handoff
  NAT44_IN2OUT_WORKER_HANDOFF: next worker 9
 
Packet 2
 
00:00:02:897486: dpdk-input
  TenGigabitEthernet4/0/0 rx queue 15
  buffer 0x255077a: current data 14, length 46, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x1
                    ext-hdr-valid 
                    l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14 
  PKT MBUF: port 0, nb_segs 1, pkt_len 60
    buf_len 2176, data_len 60, ol_flags 0x182, data_off 128, phys_addr 
0xaf01df00
    packet_type 0x111 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    Packet Offload Flags
      PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
      PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
      PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
    Packet Types
      RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
      RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
      RTE_PTYPE_L4_TCP (0x0100) TCP packet
  IP4: 38:ea:a7:16:d0:f4 -> 00:1b:21:bc:10:42
  TCP: 16.3.251.132 -> 46.3.251.132
    tos 0x00, ttl 128, length 40, checksum 0x8c6c
    fragment id 0x7954
  TCP: 55312 -> 80
    seq. 0x181a46d4 ack 0x181afd8e
    flags 0x10 ACK, tcp header: 20 bytes
    window 32768, checksum 0xadcc
00:00:05:215972: ip4-input-no-checksum
  TCP: 16.3.251.132 -> 46.3.251.132
    tos 0x00, ttl 128, length 40, checksum 0x8c6c
    fragment id 0x7954
  TCP: 55312 -> 80
    seq. 0x181a46d4 ack 0x181afd8e
    flags 0x10 ACK, tcp header: 20 bytes
    window 32768, checksum 0xadcc
00:00:05:216816: nat44-in2out-worker-handoff
  NAT44_IN2OUT_WORKER_HANDOFF: next worker 19
 
Packet 3
 
00:00:02:897486: dpdk-input
  TenGigabitEthernet4/0/0 rx queue 15
  buffer 0x2550753: current data 14, length 1500, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x2
                    ext-hdr-valid 
                    l4-cksum-computed l4-cksum-correct l2-hdr-offset 0 
l3-hdr-offset 14 
  PKT MBUF: port 0, nb_segs 1, pkt_len 1514
    buf_len 2176, data_len 1514, ol_flags 0x182, data_off 128, phys_addr 
0xaf01d540
    packet_type 0x111 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    Packet Offload Flags
      PKT_RX_RSS_HASH (0x0002) RX packet with RSS hash result
      PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
      PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
    Packet Types
      RTE_PTYPE_L2_ETHER (0x0001) Ethernet packet
      RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
      RTE_PTYPE_L4_TCP (0x0100) TCP packet
  IP4: 38:ea:a7:16:d0:f4 -> 00:1b:21:bc:10:42
  TCP: 16.4.13.67 -> 46.4.13.67
    tos 0x00, ttl 128, length 1500, checksum 0x6382
    fragment id 0x790c
  TCP: 24255 -> 80
    seq. 0x17fa1ec0 ack 0x17facb57
    flags 0x18 PSH ACK, tcp header: 20 bytes
    window 32768, checksum 0x3c02
00:00:05:215972: ip4-input-no-checksum
  TCP: 16.4.13.67 -> 46.4.13.67
    tos 0x00, ttl 128, length 1500, checksum 0x6382
    fragment id 0x790c
  TCP: 24255 -> 80
    seq. 0x17fa1ec0 ack 0x17facb57
    flags 0x18 PSH ACK, tcp header: 20 bytes
    window 32768, checksum 0x3c02
00:00:05:216816: nat44-in2out-worker-handoff
  NAT44_IN2OUT_WORKER_HANDOFF: next worker 37
 
Packet 4
 
00:00:02:897486: dpdk-input
  TenGigabitEthernet4/0/0 rx queue 15
  buffer 0x255072c: current data 14, length 292, free-list 0, clone-count 0, 
totlen-nifb 0, trace 0x3
                    ext-hdr-valid 
                    l4-cksum-computed 

Re: [vpp-dev] Config NAT plugin for with dynamic translations

2018-12-18 Thread david . leitch . vpp
The show interface output at the beginning:

vpp# show interface 
              Name               Idx       State          Counter          Count
TenGigabitEthernet21/0/0          5         up       rx packets                  6146
                                                     rx bytes                 1873378
                                                     drops                        608
                                                     ip4                         6143
                                                     rx-miss                 14754129
TenGigabitEthernet21/0/1          6         up       rx packets                  6146
                                                     rx bytes                 4797068
                                                     tx packets                   692
                                                     tx bytes                  222109
                                                     drops                       1416
                                                     ip4                         6143
                                                     rx-miss                 17712306
                                                     rx-error                       7
                                                     tx-error                     605
TenGigabitEthernet24/0/0          7         up       rx packets                     9
                                                     rx bytes                    1323
                                                     drops                          5
TenGigabitEthernet24/0/1          8         up       rx packets                     9
                                                     rx bytes                    1323
                                                     drops                          5
                                                     rx-error                       2
TenGigabitEthernet4/0/0           1         up       rx packets                  6147
                                                     rx bytes                 1565056
                                                     drops                       1117
                                                     ip4                         6144
                                                     rx-miss                  5757037
TenGigabitEthernet4/0/1           2         up       rx packets                  6147
                                                     rx bytes                 5017096
                                                     tx packets                  1021
                                                     tx bytes                  256661
                                                     drops                       2520
                                                     ip4                         6144
                                                     rx-miss                   327524
                                                     tx-error                    1114
TenGigabitEthernet7/0/0           3         up       rx packets                  6147
                                                     rx bytes                 1731624
                                                     drops                        736
                                                     ip4                         6144
                                                     rx-miss                  5262760
                                                     rx-error                       3
TenGigabitEthernet7/0/1           4         up       rx packets                  6147
                                                     rx bytes                 4517837
                                                     tx packets                   720
                                                     tx bytes                  246343
                                                     drops                       1681
                                                     ip4                         6144
                                                     rx-miss                   327272
                                                     tx-error                     733
local0                            0        down
vpp# 

After a few seconds:

vpp# 
vpp# clear interfaces
vpp# 
vpp# show interface  
              Name               Idx       State          Counter          Count
TenGigabitEthernet21/0/0          5         up       
TenGigabitEthernet21/0/1          6         up       
TenGigabitEthernet24/0/0          7         up       
TenGigabitEthernet24/0/1          8         up       
TenGigabitEthernet4/0/0           1         up       
TenGigabitEthernet4/0/1           2         up       
TenGigabitEthernet7/0/0           3         up       
TenGigabitEthernet7/0/1           4         up       
local0        

Re: [vpp-dev] Config NAT plugin for with dynamic translations

2018-12-17 Thread david . leitch . vpp
If I configure VPP without NAT (just routing) it works, but when I configure it for
NAT I have rx-miss on all interfaces.


[vpp-dev] Config NAT plugin for with dynamic translations

2018-12-17 Thread david . leitch . vpp
[Edited Message Follows]

Hi,

I want to test VPP NAT plugin performance in NAT44 dynamic-translation mode
with IPFIX logging enabled (with 4 x 10 Gb Ethernet interfaces, 40 CPU cores,
and 40 Gbps of throughput).
Based on the VPP/NAT wiki page I used the following configuration:

 
unix {
  nodaemon
  log /var/log/vpp/vpp.log
  full-coredump
  cli-listen /run/vpp/cli.sock
  gid vpp
  interactive
  exec /root/vpp.cmd
}
 
api-trace {
  on
}
 
api-segment {
  gid vpp
}
 
cpu {
main-core 41
workers 40
}
 
dpdk {
 
dev default {
num-rx-queues 40
num-tx-queues 40
}
 
## Whitelist specific interface by specifying PCI address
dev :04:00.0
 dev :04:00.1
 dev :07:00.0
 dev :07:00.1
 dev :21:00.0
 dev :21:00.1
 dev :24:00.0
 dev :24:00.1
 
num-mbufs 100
socket-mem 16384,16384
 
}
 
heapsize 90G
 
nat {
translation hash buckets 344827
translation hash memory 3498275862
 
user hash buckets 17241
user hash memory 93103448
 
max translations per user 500 
}

and these startup commands:

set int state TenGigabitEthernet4/0/0 up
set int state TenGigabitEthernet4/0/1 up
set int state TenGigabitEthernet7/0/0 up
set int state TenGigabitEthernet7/0/1 up
set int state TenGigabitEthernet21/0/0 up
set int state TenGigabitEthernet21/0/1 up
set int state TenGigabitEthernet24/0/0 up
set int state TenGigabitEthernet24/0/1 up
 
comment{  IPv4 --- }
set int ip address TenGigabitEthernet4/0/0 192.168.20.40/24
set int ip address TenGigabitEthernet4/0/1 192.168.30.40/24
set int ip address TenGigabitEthernet7/0/0 192.168.40.40/24
set int ip address TenGigabitEthernet7/0/1 192.168.50.40/24
set int ip address TenGigabitEthernet21/0/0 192.168.60.40/24
set int ip address TenGigabitEthernet21/0/1 192.168.70.40/24
set int ip address TenGigabitEthernet24/0/0 192.168.100.40/24
set int ip address TenGigabitEthernet24/0/1 192.168.110.40/24
 
comment{  ARP --- }
set ip arp TenGigabitEthernet4/0/0 192.168.20.41 38:ea:a7:16:d0:f4
set ip arp TenGigabitEthernet4/0/1 192.168.30.41 38:ea:a7:16:d0:f5
set ip arp TenGigabitEthernet7/0/0 192.168.40.41 38:ea:a7:16:d2:e0
set ip arp TenGigabitEthernet7/0/1 192.168.50.41 38:ea:a7:16:d2:e1
set ip arp TenGigabitEthernet21/0/0 192.168.60.41 38:ea:a7:16:d3:14
set ip arp TenGigabitEthernet21/0/1 192.168.70.41 38:ea:a7:16:d3:15
set ip arp TenGigabitEthernet24/0/0 192.168.100.41 38:ea:a7:16:d1:18
set ip arp TenGigabitEthernet24/0/1 192.168.110.41 38:ea:a7:16:d1:19
 
comment{  route v4 --- }
ip route add 16.0.0.0/8 via 192.168.20.41
ip route add 46.0.0.0/8 via 192.168.30.41
ip route add 17.0.0.0/8 via 192.168.40.41
ip route add 47.0.0.0/8 via 192.168.50.41
ip route add 18.0.0.0/8 via 192.168.60.41
ip route add 48.0.0.0/8 via 192.168.70.41
ip route add 19.0.0.0/8 via 192.168.100.41
ip route add 49.0.0.0/8 via 192.168.110.41
 
comment{  NAT44 Interface ---}
set interface nat44 in TenGigabitEthernet4/0/0 out TenGigabitEthernet4/0/1
set interface nat44 in TenGigabitEthernet7/0/0 out TenGigabitEthernet7/0/1
set interface nat44 in TenGigabitEthernet21/0/0 out TenGigabitEthernet21/0/1
set interface nat44 in TenGigabitEthernet24/0/0 out TenGigabitEthernet24/0/1

comment{  NAT44 pool address ---}
nat44 add address 16.0.0.1 - 16.0.100.255
nat44 add address 17.0.0.1 - 17.0.100.255
nat44 add address 18.0.0.1 - 18.0.100.255
 
comment{  IPFIX --}
set ipfix exporter collector 192.168.110.111 src 192.168.110.40 path-mtu 100 template-interval 1
nat ipfix logging

After a few seconds VPP does not do any NAT translation, and show errors shows me no
errors:

vpp# show errors 
   Count                    Node                  Reason
      1064          nat44-in2out-slowpath         Good in2out packets processed
      1090              nat44-in2out              Good in2out packets processed
         8               lldp-input               lldp packets received on 
disabled interfaces
        13                llc-input               unknown llc ssap/dsap
      1076     TenGigabitEthernet4/0/1-output     interface is down
      1027          nat44-in2out-slowpath         Good in2out packets processed
      1071              nat44-in2out              Good in2out packets processed
      1065     TenGigabitEthernet4/0/1-output     interface is down
      1038          nat44-in2out-slowpath         Good in2out packets processed
      1077              nat44-in2out              Good in2out packets processed
      1013     TenGigabitEthernet4/0/1-output     interface is down
       997          nat44-in2out-slowpath         Good in2out packets processed
      1029              nat44-in2out              Good in2out packets processed
       996     TenGigabitEthernet4/0/1-output     interface is down
      1012          nat44-in2out-slowpath         Good in2out packets processed
      1032              nat44-in2out              Good in2out packets processed
       951     


[vpp-dev] Enable an INPUT_NODE for a specific thread or worker

2018-11-27 Thread david . leitch . vpp
Hi,
I want to enable an INPUT node for just one worker and disable it for the others
(the main thread and the other workers), like the dpdk-input node does.
When the state of the INPUT node is set to VLIB_NODE_STATE_POLLING it is enabled
for all threads, and when it is set to VLIB_NODE_STATE_DISABLED it is disabled for
all threads.

tnx
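
For reference, a minimal sketch of one way to get this per-thread behaviour
(assuming a node registered under the hypothetical name "my-input"; this is a
sketch, not a drop-in implementation): node state is kept per thread because each
worker has its own vlib_main_t, so the same node can be set to polling on exactly
one worker's vlib_main and disabled on all the others, which is essentially how
dpdk-input ends up polling only on selected workers.

#include <vlib/vlib.h>
#include <vlib/threads.h>

/* Sketch: enable the input node "my-input" on one worker, disable it elsewhere. */
static clib_error_t *
enable_input_on_one_thread (vlib_main_t * vm, u32 target_thread_index)
{
  vlib_node_t *n = vlib_get_node_by_name (vm, (u8 *) "my-input");
  u32 i;

  if (n == 0)
    return clib_error_return (0, "node 'my-input' not found");

  vlib_worker_thread_barrier_sync (vm);    /* hold the workers while changing state */
  for (i = 0; i < vec_len (vlib_mains); i++)
    vlib_node_set_state (vlib_mains[i], n->index,
                         i == target_thread_index ?
                         VLIB_NODE_STATE_POLLING : VLIB_NODE_STATE_DISABLED);
  vlib_worker_thread_barrier_release (vm);
  return 0;
}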


Re: [vpp-dev] Memory Performance issue #vpp

2018-10-24 Thread david . leitch . vpp
[Edited Message Follows]

Do you mean it is impossible to have packet processing and memory operations at
the same time,
for example doing vec_validate or vec_free while the NAT plugin is working and
creating new sessions?

I get a drop rate when I vec_free or vec_validate a memory size greater than 3 GB.

What are your suggestions for such problems?




Re: [vpp-dev] Memory Performance issue #vpp

2018-10-24 Thread david . leitch . vpp
Hi Matus,

I know it will take some time to add a new deterministic mapping.
But why does the "show memory" command cause a drop rate,
and why does adding a new deterministic mapping cause a drop rate?

Thanks!


[vpp-dev] Memory Performance issue #vpp

2018-10-24 Thread david . leitch . vpp
Hi,

I tested VPP performance for CGNAT in deterministic mode. While VPP is working
and has sessions, the "show memory" command causes huge drop rates; likewise, when I want
to add another deterministic mapping it takes a long delay (for the memory
allocation) and again a huge drop rate occurs that is related to the memory allocation.

At first I thought the drop rate came from main-heap memory locking, so I changed the
source code and commented out mheap_maybe_lock and mheap_maybe_unlock in the mheap_usage
path, but whenever the main heap is busy I still have a drop rate for incoming traffic.
:(
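
To illustrate the kind of contention described above, a minimal sketch in plain
pthreads (this is not VPP's mheap code; heap_lock, heap_alloc and heap_walk are
made-up names): if the command that walks the heap holds the same lock the datapath
needs for every allocation, the workers stall for the whole walk and packets drop.

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Fast path: called whenever a new NAT session needs memory. */
void *
heap_alloc (size_t n)
{
  void *p;
  pthread_mutex_lock (&heap_lock);   /* blocks for as long as a heap walk is running */
  p = 0;                             /* ... carve n bytes out of the heap ... */
  (void) n;
  pthread_mutex_unlock (&heap_lock);
  return p;
}

/* Slow path: roughly what a "show memory"-style usage walk does. */
void
heap_walk (void)
{
  pthread_mutex_lock (&heap_lock);
  /* ... iterate every chunk of a multi-GB heap; this can take a long time ... */
  pthread_mutex_unlock (&heap_lock);
}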
 
Is this normal behavior, or is it a memory performance problem?
 
Thanks!


[vpp-dev] Using Ncurses library in VPP CLI

2018-07-04 Thread david . leitch . vpp
Hi,
Is it possible to use ncurses library functions in a VPP CLI command to get a nicer
view?
I used a basic sample but it does not work. I compiled with -lncurses and used these
commands:
 initscr();                     /* Start curses mode              */ 
 printw("Hello World !!!");     /* Print Hello World              */
 refresh();                     /* Print it on to the real screen */
 getch();                       /* Wait for user input            */
 endwin();                      /* End curses mode                */
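
For comparison, the usual way a VPP CLI command renders a view is plain
vlib_cli_output with format strings rather than a screen library; a minimal sketch
of a hypothetical command handler (the command itself is made up):

#include <vlib/vlib.h>

/* Sketch: print an aligned two-column table from a CLI handler. */
static clib_error_t *
show_my_table_command_fn (vlib_main_t * vm, unformat_input_t * input,
                          vlib_cli_command_t * cmd)
{
  vlib_cli_output (vm, "%-24s %12s", "Name", "Count");
  vlib_cli_output (vm, "%-24s %12u", "example-counter", 42);
  return 0;
}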


Re: [vpp-dev] vec_validate Long Delay #vpp

2018-06-18 Thread david . leitch . vpp
Hi Dave,

I am using a 64-bit image (-DCLIB_VEC64=1) and, as I said, I can get vectors
of large size (more than 5 GB).
My problem is not the size of the vector, it is the time delay of the allocation.
When I use vec_validate to allocate a huge memory size, there is a very, very long
delay before it responds.
I don't get a core dump for lack of memory, but it takes a very long time. I want to
reduce or eliminate this delay for the memory allocation, and before allocating,
how can I calculate the memory usage of the vector?




[vpp-dev] vec_validate Long Delay #vpp

2018-06-18 Thread david . leitch . vpp
Hi,

I have a problem when using *vec_validate* to allocate a large memory size (more
than 5 GB). I am sure that I have enough memory on my system,
but vec_validate takes a very long time to allocate the memory. How can I decrease this
delay for allocating memory, and how can I calculate the memory size
before allocating with vec_validate, to make sure the system never core-dumps for
lack of memory? (A rough sizing sketch follows below.)

Thanks in advance
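
For the sizing part of the question, a rough sketch (this is not a VPP API;
my_elt_t and estimate_vec_bytes are made-up names): the payload that vec_validate
has to back is roughly the element count times the element size, so it can be
computed up front and compared against the memory you can afford before the call.

#include <vppinfra/vec.h>

typedef struct { u64 a, b; } my_elt_t;     /* hypothetical element type */

/* Approximate payload bytes for a vector of n_elts elements
   (the small vector header overhead is ignored). */
static inline u64
estimate_vec_bytes (u64 n_elts)
{
  return n_elts * sizeof (my_elt_t);
}

/* usage sketch:
     my_elt_t *v = 0;
     if (estimate_vec_bytes (n) < bytes_we_can_afford)
       vec_validate (v, n - 1);            // index n-1 => n elements
*/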
