Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-06-03 Thread Pim van Pelt
PTAL: https://gerrit.fd.io/r/c/vpp/+/36334



On Fri, Jun 3, 2022 at 10:25 PM Pim van Pelt via lists.fd.io  wrote:

> Hi Damjan,
>
> Just a quick note: 22.06 still has this regression:
>
> 1: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci <
> n->n_errors' fails
>
>
> Would a reasonable fix be to have the ASSERT site here return NULL
> instead, with the two call sites at L95 and L224 made tolerant of that?
>
> On Thu, Apr 7, 2022 at 3:11 PM Damjan Marion (damarion) <
> damar...@cisco.com> wrote:
>
>>
>> Yeah, it looks like ip4_neighbor_probe is sending a packet to a deleted
>> interface:
>>
>> (gdb) p n->name
>> $4 = (u8 *) 0x7fff82b47578 "interface-3-output-deleted"
>>
>> So it is right that this assert kicks in.
>>
>> Likely what happens is that the batch of commands first triggers
>> generation of a neighbor probe packet; immediately after that the
>> interface is deleted, but the packet is still in flight and the drop
>> node tries to bump counters for the deleted interface.
>>
>> —
>> Damjan
>>
>>
>>
>> > On 06.04.2022., at 16:21, Pim van Pelt  wrote:
>> >
>> > Hoi,
>> >
>> > The following reproduces the drop.c:77 assertion:
>> >
>> > create loopback interface instance 0
>> > set interface ip address loop0 10.0.0.1/32
>> > set interface state GigabitEthernet3/0/1 up
>> > set interface state loop0 up
>> > set interface state loop0 down
>> > set interface ip address del loop0 10.0.0.1/32
>> > delete loopback interface intfc loop0
>> > set interface state GigabitEthernet3/0/1 down
>> > set interface state GigabitEthernet3/0/1 up
>> > comment { the following crashes VPP }
>> > set interface state GigabitEthernet3/0/1 down
>> >
>> > I found that adding IPv6 addresses does not provoke the crash, while
>> adding IPv4 addresses to loop0 does provoke it.
>> >
>> > groet,
>> > Pim
>> >
>> > On Wed, Apr 6, 2022 at 3:56 PM Pim van Pelt via lists.fd.io <ipng...@lists.fd.io> wrote:
>> > Hoi,
>> >
>> > The crash I observed is now gone, thanks!
>> >
>> > VPP occasionally hits an ASSERT related to error counters at drop.c:77
>> -- I'll try to see if I can get a reproduction, but it may take a while,
>> and it may be transient.
>> >
>> > 11: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci
>> < n->n_errors' fails
>> >
>> > Thread 14 "vpp_wk_11" received signal SIGABRT, Aborted.
>> > [Switching to Thread 0x7fff4bbfd700 (LWP 182685)]
>> > __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
>> > 50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
>> > (gdb) bt
>> > #0  __GI_raise (sig=sig@entry=6) at
>> ../sysdeps/unix/sysv/linux/raise.c:50
>> > #1  0x76a5f859 in __GI_abort () at abort.c:79
>> > #2  0x004072e3 in os_panic () at
>> /home/pim/src/vpp/src/vpp/vnet/main.c:413
>> > #3  0x76daea29 in debugger () at
>> /home/pim/src/vpp/src/vppinfra/error.c:84
>> > #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
>> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
>> > at /home/pim/src/vpp/src/vppinfra/error.c:143
>> > #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
>> /home/pim/src/vpp/src/vlib/drop.c:77
>> > #6  0x76f77c57 in process_drop_punt (vm=0x7fffa09fb2c0,
>> node=0x7fffa0c79b00, frame=0x7fff97168140,
>> disposition=ERROR_DISPOSITION_DROP)
>> > at /home/pim/src/vpp/src/vlib/drop.c:224
>> > #7  0x76f77957 in error_drop_node_fn_hsw (vm=0x7fffa09fb2c0,
>> node=0x7fffa0c79b00, frame=0x7fff97168140)
>> > at /home/pim/src/vpp/src/vlib/drop.c:248
>> > #8  0x76f0b10d in dispatch_node (vm=0x7fffa09fb2c0,
>> node=0x7fffa0c79b00, type=VLIB_NODE_TYPE_INTERNAL,
>> > dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fff97168140,
>> last_time_stamp=5318787653101516) at /home/pim/src/vpp/src/vlib/main.c:961
>> > #9  0x76f0bb60 in dispatch_pending_node (vm=0x7fffa09fb2c0,
>> pending_frame_index=5, last_time_stamp=5318787653101516)
>> > at /home/pim/src/vpp/src/vlib/main.c:1120
>> > #10 0x76f06e0f in vlib_main_or_worker_loop (vm=0x7fffa09fb2c0,
>> is_main=0) at /home/pim/src/vpp/src/vlib/main.c:1587
>> > #11 0x76f06537 in vlib_worker_loop (vm=0x7fffa09fb2c0) at
>> /home/pim/src/vpp/src/vlib/main.c:1721
>> > #12 0x76f44ef4 in vlib_worker_thread_fn (arg=0x7fff98eabec0) at
>> /home/pim/src/vpp/src/vlib/threads.c:1587
>> > #13 0x76f3ffe5 in vlib_worker_thread_bootstrap_fn
>> (arg=0x7fff98eabec0) at /home/pim/src/vpp/src/vlib/threads.c:426
>> > #14 0x76e61609 in start_thread (arg=) at
>> pthread_create.c:477
>> > #15 0x76b5c163 in clone () at
>> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
>> > (gdb) up 4
>> > #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
>> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
>> > at /home/pim/src/vpp/src/vppinfra/error.c:143
>> > 143 debugger ();
>> > (gdb) up
>> > #5  0x76f782d9 in counter_index 

Re: 35640 has a crashing regression, was Re: [vpp-dev] vpp-papi stats is broken

2022-06-03 Thread Pim van Pelt
Hi Damjan,

Just a quick note: 22.06 still has this regression:

1: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci <
n->n_errors' fails


Would a reasonable fix be to have the ASSERT site here return NULL instead,
with the two call sites at L95 and L224 made tolerant of that? A sketch of
that shape of change follows.
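For illustration, a minimal sketch of that shape of change, based on the
counter_index() internals visible in the gdb listing further below. Since
counter_index() returns a u32, a sentinel such as ~0 stands in for NULL;
the sentinel name and the call-site guard are hypothetical, not actual
drop.c code:

/* Hedged sketch only -- not the actual drop.c code. Idea: return a
 * sentinel instead of asserting, and let callers skip the counter bump
 * when the error index refers to a deleted node. */
#define COUNTER_INDEX_INVALID (~0u)

static u32
counter_index_tolerant (vlib_main_t *vm, vlib_error_t e)
{
  u32 ni = vlib_error_get_node (&vm->node_main, e);
  vlib_node_t *n = vlib_get_node (vm, ni);
  u32 ci = vlib_error_get_code (&vm->node_main, e);

  if (ci >= n->n_errors)
    return COUNTER_INDEX_INVALID;   /* was: ASSERT (ci < n->n_errors); */

  return ci + n->error_heap_index;
}

/* The two call sites (drop.c L95 and L224) would then guard the bump,
 * roughly: */
/*   ci = counter_index_tolerant (vm, e);   */
/*   if (ci != COUNTER_INDEX_INVALID)       */
/*     counters[ci] += count;               */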

On Thu, Apr 7, 2022 at 3:11 PM Damjan Marion (damarion) 
wrote:

>
> Yeah, it looks like ip4_neighbor_probe is sending a packet to a deleted interface:
>
> (gdb) p n->name
> $4 = (u8 *) 0x7fff82b47578 "interface-3-output-deleted"
>
> So it is right that this assert kicks in.
>
> Likely what happens is that the batch of commands first triggers
> generation of a neighbor probe packet; immediately after that the
> interface is deleted, but the packet is still in flight and the drop node
> tries to bump counters for the deleted interface.
>
> —
> Damjan
>
>
>
> > On 06.04.2022., at 16:21, Pim van Pelt  wrote:
> >
> > Hoi,
> >
> > The following reproduces the drop.c:77 assertion:
> >
> > create loopback interface instance 0
> > set interface ip address loop0 10.0.0.1/32
> > set interface state GigabitEthernet3/0/1 up
> > set interface state loop0 up
> > set interface state loop0 down
> > set interface ip address del loop0 10.0.0.1/32
> > delete loopback interface intfc loop0
> > set interface state GigabitEthernet3/0/1 down
> > set interface state GigabitEthernet3/0/1 up
> > comment { the following crashes VPP }
> > set interface state GigabitEthernet3/0/1 down
> >
> > I found that adding IPv6 addresses does not provoke the crash, while
> adding IPv4 addresses to loop0 does provoke it.
> >
> > groet,
> > Pim
> >
> > On Wed, Apr 6, 2022 at 3:56 PM Pim van Pelt via lists.fd.io <ipng...@lists.fd.io> wrote:
> > Hoi,
> >
> > The crash I observed is now gone, thanks!
> >
> > VPP occasionally hits an ASSERT related to error counters at drop.c:77
> -- I'll try to see if I can get a reproduction, but it may take a while,
> and it may be transient.
> >
> > 11: /home/pim/src/vpp/src/vlib/drop.c:77 (counter_index) assertion `ci <
> n->n_errors' fails
> >
> > Thread 14 "vpp_wk_11" received signal SIGABRT, Aborted.
> > [Switching to Thread 0x7fff4bbfd700 (LWP 182685)]
> > __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
> > 50  ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
> > (gdb) bt
> > #0  __GI_raise (sig=sig@entry=6) at
> ../sysdeps/unix/sysv/linux/raise.c:50
> > #1  0x76a5f859 in __GI_abort () at abort.c:79
> > #2  0x004072e3 in os_panic () at
> /home/pim/src/vpp/src/vpp/vnet/main.c:413
> > #3  0x76daea29 in debugger () at
> /home/pim/src/vpp/src/vppinfra/error.c:84
> > #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
> > at /home/pim/src/vpp/src/vppinfra/error.c:143
> > #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
> /home/pim/src/vpp/src/vlib/drop.c:77
> > #6  0x76f77c57 in process_drop_punt (vm=0x7fffa09fb2c0,
> node=0x7fffa0c79b00, frame=0x7fff97168140,
> disposition=ERROR_DISPOSITION_DROP)
> > at /home/pim/src/vpp/src/vlib/drop.c:224
> > #7  0x76f77957 in error_drop_node_fn_hsw (vm=0x7fffa09fb2c0,
> node=0x7fffa0c79b00, frame=0x7fff97168140)
> > at /home/pim/src/vpp/src/vlib/drop.c:248
> > #8  0x76f0b10d in dispatch_node (vm=0x7fffa09fb2c0,
> node=0x7fffa0c79b00, type=VLIB_NODE_TYPE_INTERNAL,
> > dispatch_state=VLIB_NODE_STATE_POLLING, frame=0x7fff97168140,
> last_time_stamp=5318787653101516) at /home/pim/src/vpp/src/vlib/main.c:961
> > #9  0x76f0bb60 in dispatch_pending_node (vm=0x7fffa09fb2c0,
> pending_frame_index=5, last_time_stamp=5318787653101516)
> > at /home/pim/src/vpp/src/vlib/main.c:1120
> > #10 0x76f06e0f in vlib_main_or_worker_loop (vm=0x7fffa09fb2c0,
> is_main=0) at /home/pim/src/vpp/src/vlib/main.c:1587
> > #11 0x76f06537 in vlib_worker_loop (vm=0x7fffa09fb2c0) at
> /home/pim/src/vpp/src/vlib/main.c:1721
> > #12 0x76f44ef4 in vlib_worker_thread_fn (arg=0x7fff98eabec0) at
> /home/pim/src/vpp/src/vlib/threads.c:1587
> > #13 0x76f3ffe5 in vlib_worker_thread_bootstrap_fn
> (arg=0x7fff98eabec0) at /home/pim/src/vpp/src/vlib/threads.c:426
> > #14 0x76e61609 in start_thread (arg=) at
> pthread_create.c:477
> > #15 0x76b5c163 in clone () at
> ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
> > (gdb) up 4
> > #4  0x76dae7fa in _clib_error (how_to_die=2, function_name=0x0,
> line_number=0, fmt=0x76f9d19c "%s:%d (%s) assertion `%s' fails")
> > at /home/pim/src/vpp/src/vppinfra/error.c:143
> > 143 debugger ();
> > (gdb) up
> > #5  0x76f782d9 in counter_index (vm=0x7fffa09fb2c0, e=3416) at
> /home/pim/src/vpp/src/vlib/drop.c:77
> > 77        ASSERT (ci < n->n_errors);
> > (gdb) list
> > 72
> > 73        ni = vlib_error_get_node (&vm->node_main, e);
> > 74        n = vlib_get_node (vm, ni);
> > 75
> > 76        ci = 
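The listing above is truncated in the archive. For reference, the whole
function in frame #5 looks roughly like this (a hedged reconstruction from
the quoted lines and frames, not a verbatim copy of a checkout):

static u32
counter_index (vlib_main_t *vm, vlib_error_t e)
{
  vlib_node_t *n;
  u32 ci, ni;

  ni = vlib_error_get_node (&vm->node_main, e);
  n = vlib_get_node (vm, ni);

  ci = vlib_error_get_code (&vm->node_main, e);
  ASSERT (ci < n->n_errors);   /* drop.c:77 -- the failing assertion */

  ci += n->error_heap_index;
  return ci;
}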

Re: [vpp-dev] #ipsec #dpdk VPP 22.02 no DPDK crypto devs available on AWS VM

2022-06-03 Thread Matthew Smith via lists.fd.io
Hi Gabi,

It looks like aesni_mb and aesni_gcm are disabled in VPP's DPDK build
configuration; see build/external/packages/dpdk.mk. You would probably need
to remove those from DPDK_DRIVERS_DISABLED and rebuild if you want to use
them. That said, I doubt you would see much improvement as a result of
using them. VPP's ipsecmb crypto plugin uses the same optimized crypto
library that those vdevs use. I think VPP's native crypto plugin is
assigned the highest priority, so that plugin is likely handling crypto
operations for your tunnels by default. If you want to use the ipsecmb
crypto plugin instead, you can use a command like "vppctl set crypto
handler <cipher> ipsecmb" for the ciphers used by your tunnels. I don't
know if you'll see any difference in performance by using ipsecmb instead
of native, but it doesn't hurt to try it.
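For concreteness, here are illustrative commands for both options; the
cipher names are examples, and the exact algorithm names depend on your
build:

# Option 1: re-enable the vdevs (edit build/external/packages/dpdk.mk,
# drop the aesni entries from DPDK_DRIVERS_DISABLED, then rebuild the
# external deps from the top of the tree):
make install-ext-deps

# Option 2: switch specific ciphers to the ipsecmb engine at runtime:
vppctl show crypto engines
vppctl set crypto handler aes-128-gcm ipsecmb
vppctl set crypto handler aes-256-gcm ipsecmb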

Here are some thoughts and questions on tuning to improve IPsec throughput:

   - If you haven't already, you should configure at least one worker
   thread so your crypto operations are not executed on the same CPU as
   the main thread (see the configuration sketch after this list).
   - Are you using one tunnel or multiple tunnels? An SA will be bound to a
   particular thread in order to keep packets in order. With synchronous
   crypto, all of the operations for the SA will be handled by that one thread
   and throughput will be limited to how much crypto the CPU that thread is
   bound to can handle. So you might get higher throughput by distributing
   traffic across multiple tunnels if possible. Or if you enable asynchronous
   crypto, the sw_scheduler plugin tries to distribute crypto operations to
   other threads, which might help.
   - With multiple workers, you could get encrypt & decrypt operations
   handled by different threads/cores. If you have a LAN interface and a WAN
   interface and your tunnel is terminated on the WAN interface to allow VMs
   on your LAN subnet to communicate with some remote systems on the other
   side of the tunnel, you could bind the RX queues for the interfaces to
   different threads. Outbound packets would be encrypted by the threads which
   handle the queues for the LAN interface. Inbound packets will be decrypted
   by the threads which handle the queues for the WAN interface.
   - You mentioned that you can't get better throughput from VPP than you
   can with kernel IPsec. Is the kernel getting the same throughput as VPP or
   higher? If it's close to the same, you may be hitting some external
   resource limit. E.g. the other end of the tunnel could be the bottleneck.
   Or AWS's traffic shaping might be preventing you from sending any faster.
   - Are you using policy-based IPsec or routed IPsec (creating a tunnel
   interface)? There have been patches merged recently which are intended to
   improve performance for policy-based IPsec, but if you are using
   policy-based IPsec you might try using a tunnel interface instead and see
   if your measurements improve.
   - Fragmentation and reassembly can impact IPsec throughput. If your
   packets are close to the MTU of the hardware interface that they will
   be sent out of, the encapsulation & crypto padding may push the packet
   size over the MTU and the encrypted packet may need to be fragmented before
   being sent. That means the other end of the tunnel will need to wait for
   all the fragments to arrive and reassemble them before it can decrypt the
   packet. If you are using a tunnel interface, you can set the MTU on the
   tunnel interface lower than the MTU on the hardware interface. Then packets
   would be fragmented by the tunnel interface before being encrypted and the
   other end would not need to reassemble them.
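A configuration sketch pulling the worker, RX-queue, and MTU points
together; interface names, core numbers, and the MTU value are
placeholders, not recommendations:

# startup.conf: at least one worker so crypto is off the main thread
cpu {
  main-core 1
  corelist-workers 2-3
}

# vppctl: pin RX queues so LAN (encrypt) and WAN (decrypt) traffic land
# on different workers
set interface rx-placement GigabitEthernet0/6/0 queue 0 worker 0
set interface rx-placement GigabitEthernet0/7/0 queue 0 worker 1

# vppctl: lower the tunnel MTU so fragmentation happens before encryption
set interface mtu packet 1400 ipip0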

-Matt


On Fri, Jun 3, 2022 at 7:52 AM  wrote:

> Hi,
> I am a beginner in VPP and DPDK. I am trying to create a
> high-performance AWS VM which should do IPsec tunneling.
>
> The IPsec traffic is running well, but I cannot exceed 8 Gb/s of traffic
> throughput, and I cannot convince VPP to beat "ip xfrm" in terms of
> IPsec traffic throughput.
>
> Whenever VPP starts, I get this warning:
>
> dpdk/cryptodev [warn  ]: dpdk_cryptodev_init: Not enough cryptodev
> resources
>
> regardless of which CPUs I have enabled.
>
> If I specify
> vdev crypto_aesni_mb
> or
> vdev crypto_aesni_gcm
> in the dpdk section of the startup.conf file, I always hit this error:
> 0: dpdk_config: rte_eal_init returned -1
>
> I am using Ubuntu 20.04 LTS and the CPU flags are:
>
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat
> pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm
> constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid
> aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1
> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand
> hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust
> bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx
> smap clflushopt clwb 

Re: [vpp-dev] api msg deadlock

2022-06-03 Thread Florin Coras
Hi Wanghe, 

The only API bindings supported today are C, Python, and Golang. Maybe
somebody familiar with the jvpp code can help you out, but otherwise I'd
recommend switching if possible.

Regards,
Florin

> On Jun 3, 2022, at 7:55 AM, NUAA无痕  wrote:
> 
> Hi, florin
> 
> Regarding this question: I compared the C++ code with the jvpp code, and
> I found that jvpp may have a bug; even updating VPP cannot resolve it.
> 
> Our jvpp code follows VPP version 1901, which ships a jvpp example at
> vpp-1901/extras/japi/java/jvpp-core/io/fd/vpp/jvpp/core/examples/CreateSubInterfaceExample.java,
> and our code is based on it.
> 
> Now the VPP version is 2101. When the Java code connects to VPP and then
> calls the "close" function, it reports "peer unresponsive, give up".
> This error comes from the vl_client_disconnect function in
> src/vlibmemory/memory_client.c.
> 
> The reason for this error is that svm_queue_sub always returns -2 until
> the timeout fires.
> 
> Here is the code; the reason is that vl_input_queue->cursize == 0 and
> vl_input_queue->head == vl_input_queue->tail:
> 
> int
> vl_client_disconnect (void)
> {
>   vl_api_memclnt_delete_reply_t *rp;
>   svm_queue_t *vl_input_queue;
>   api_main_t *am = vlibapi_get_main ();
>   time_t begin;
> 
>   vl_input_queue = am->vl_input_queue;
>   vl_client_send_disconnect (0 /* wait for reply */ );
> 
>   /*
>* Have to be careful here, in case the client is disconnecting
>* because e.g. the vlib process died, or is unresponsive.
>*/
>   begin = time (0);
>   while (1)
> {
>   time_t now;
> 
>   now = time (0);
> 
>   if (now >= (begin + 2))
> {
>  clib_warning ("peer unresponsive, give up");
>  am->my_client_index = ~0;
>  am->my_registration = 0;
>  am->shmem_hdr = 0;
>  return -1;
> }
> 
> /* svm_queue_sub returns -2 here because vl_input_queue->cursize == 0 */
>   if (svm_queue_sub (vl_input_queue, (u8 *) & rp, SVM_Q_NOWAIT, 0) < 0)
> continue;
> 
>   VL_MSG_API_UNPOISON (rp);
> 
>   /* drain the queue */
>   if (ntohs (rp->_vl_msg_id) != VL_API_MEMCLNT_DELETE_REPLY)
> {
>  clib_warning ("queue drain: %d", ntohs (rp->_vl_msg_id));
>  vl_msg_api_handler ((void *) rp);
>  continue;
> }
>   vl_msg_api_handler ((void *) rp);
>   break;
> }
> 
>   vl_api_name_and_crc_free ();
>   return 0;
> }
> 
> When I use C++ with the VPP binary API, vl_input_queue->cursize == 1 and
> vl_input_queue->head != vl_input_queue->tail,
> 
> so the C++ binary API client behaves correctly with respect to the
> svm_queue_* family of functions.
> 
> Although JVpp is no longer supported, this is important for me!
> 
> Can you give a patch for jvpp? Thanks
> 
> Best regards
> 
> Wanghe
> 
> 


-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21501): https://lists.fd.io/g/vpp-dev/message/21501
Mute This Topic: https://lists.fd.io/mt/91372330/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: [vpp-dev] api msg deadlock

2022-06-03 Thread NUAA无痕
Hi, florin

Regarding this question: I compared the C++ code with the jvpp code, and I
found that jvpp may have a bug; even updating VPP cannot resolve it.

Our jvpp code follows VPP version 1901, which ships a jvpp example at
vpp-1901/extras/japi/java/jvpp-core/io/fd/vpp/jvpp/core/examples/CreateSubInterfaceExample.java,
and our code is based on it.

Now the VPP version is 2101. When the Java code connects to VPP and then
calls the "close" function, it reports "peer unresponsive, give up".
This error comes from the vl_client_disconnect function in
src/vlibmemory/memory_client.c.

The reason for this error is that svm_queue_sub always returns -2 until the
timeout fires.

Here is the code; the reason is that vl_input_queue->cursize == 0 and
vl_input_queue->head == vl_input_queue->tail:

int
vl_client_disconnect (void)
{
  vl_api_memclnt_delete_reply_t *rp;
  svm_queue_t *vl_input_queue;
  api_main_t *am = vlibapi_get_main ();
  time_t begin;

  vl_input_queue = am->vl_input_queue;
  vl_client_send_disconnect (0 /* wait for reply */ );

  /*
   * Have to be careful here, in case the client is disconnecting
   * because e.g. the vlib process died, or is unresponsive.
   */
  begin = time (0);
  while (1)
{
  time_t now;

  now = time (0);

  if (now >= (begin + 2))
{
 clib_warning ("peer unresponsive, give up");
 am->my_client_index = ~0;
 am->my_registration = 0;
 am->shmem_hdr = 0;
 return -1;
}

/* svm_queue_sub returns -2 here because vl_input_queue->cursize == 0 */
  if (svm_queue_sub (vl_input_queue, (u8 *) & rp, SVM_Q_NOWAIT, 0) < 0)
continue;

  VL_MSG_API_UNPOISON (rp);

  /* drain the queue */
  if (ntohs (rp->_vl_msg_id) != VL_API_MEMCLNT_DELETE_REPLY)
{
 clib_warning ("queue drain: %d", ntohs (rp->_vl_msg_id));
 vl_msg_api_handler ((void *) rp);
 continue;
}
  vl_msg_api_handler ((void *) rp);
  break;
}

  vl_api_name_and_crc_free ();
  return 0;
}

When I use C++ with the VPP binary API, vl_input_queue->cursize == 1 and
vl_input_queue->head != vl_input_queue->tail,

so the C++ binary API client behaves correctly with respect to the
svm_queue_* family of functions.
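For reference, the SVM_Q_NOWAIT path in svm_queue_sub looks roughly like
this (a simplified sketch of src/svm/queue.c, not a verbatim copy); it is
why an empty queue makes the disconnect loop above spin on -2 until the
2-second timeout:

/* Simplified sketch of svm_queue_sub's non-blocking path. */
if (cond == SVM_Q_NOWAIT)
  {
    if (svm_queue_trylock (q))
      return (-1);            /* could not take the lock */
  }
else
  svm_queue_lock (q);

if (PREDICT_FALSE (q->cursize == 0))
  {
    if (cond == SVM_Q_NOWAIT)
      {
        svm_queue_unlock (q);
        return (-2);          /* empty queue: what the jvpp client hits */
      }
    /* (blocking and timed-wait paths omitted) */
  }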

Although JVpp is no longer supported, this is important for me!

Can you give a patch for jvpp? Thanks

Best regards

Wanghe

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21500): https://lists.fd.io/g/vpp-dev/message/21500
Mute This Topic: https://lists.fd.io/mt/91372330/21656
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] #ipsec #dpdk VPP 22.02 no DPDK crypto devs available on AWS VM

2022-06-03 Thread gv . florian
Hi,
I am a beginner in VPP and DPDK. I am trying to create a high-performance
AWS VM which should do IPsec tunneling.

The IPsec traffic is running well, but I cannot exceed 8 Gb/s of traffic
throughput, and I cannot convince VPP to beat "ip xfrm" in terms of IPsec
traffic throughput.

Whenever VPP starts, I get this warning:

dpdk/cryptodev     [warn  ]: dpdk_cryptodev_init: Not enough cryptodev resources

regardless of which CPUs I have enabled.

If I specify
vdev crypto_aesni_mb
or
vdev crypto_aesni_gcm
in the dpdk section of the startup.conf file, I always hit this error:
0: dpdk_config: rte_eal_init returned -1
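
(For clarity, the startup.conf stanza being described looks roughly like
this; the PCI address is a placeholder:)

dpdk {
  dev 0000:00:06.0
  vdev crypto_aesni_mb
}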

I am using Ubuntu 20.04 LTS and the CPU flags are:

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 
clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc 
arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf 
tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic 
movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm 
abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 hle avx2 smep 
bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb 
avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke

Can somebody tell me what I am missing, or how I can find the right
configuration?

Thank you a lot,
Gabi

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#21499): https://lists.fd.io/g/vpp-dev/message/21499
Mute This Topic: https://lists.fd.io/mt/91520137/21656
Mute #ipsec:https://lists.fd.io/g/vpp-dev/mutehashtag/ipsec
Mute #dpdk:https://lists.fd.io/g/vpp-dev/mutehashtag/dpdk
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub [arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-