Here is the gdb output from the vpp core dump:

(gdb) f
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51      in ../sysdeps/unix/sysv/linux/raise.c
(gdb) bt
#0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1  0x00007f811edf2801 in __GI_abort () at abort.c:79
#2  0x000055edb4f0ad44 in os_exit ()
#3  0x00007f811f6f53d9 in ?? () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#4  <signal handler called>
#5  0x00007f811f6db793 in ?? () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#6  0x00007f811f6df4d9 in ?? () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#7  0x00007f811f6a4eee in vlib_call_init_exit_functions () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#8  0x00007f811f6b5d17 in vlib_main () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#9  0x00007f811f6f4416 in ?? () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#10 0x00007f811f1cb834 in clib_calljmp () from /usr/lib/x86_64-linux-gnu/libvppinfra.so.19.08.1
#11 0x00007ffe3cb01770 in ?? ()
#12 0x00007f811f6f586f in vlib_unix_main () from /usr/lib/x86_64-linux-gnu/libvlib.so.19.08.1
#13 0x3de8f63100000400 in ?? ()
#14 0x8100458b49ffd49d in ?? ()
#15 0x8b4800000100f868 in ?? ()
#16 0x6d8141fffffb6085 in ?? ()
#17 0x408b480000010008 in ?? ()
#18 0x480000000000c740 in ?? ()
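Most of the frames above print `??` because the installed vpp libraries are stripped. A sketch of steps that usually resolve the symbols (the `vpp-dbg` package name and version pin are assumptions; adjust to your distro and build):

```
# Install debug symbols matching the running 19.08.1 build, e.g.:
#   apt-get install vpp-dbg=19.08.1-release
# Re-open the core against the same vpp binary:
#   gdb /usr/bin/vpp /path/to/core
(gdb) bt full               # backtrace with locals, now symbolized
(gdb) info threads          # what each thread was doing at crash time
(gdb) thread apply all bt   # stacks for every worker thread
```

Frames #13-#18 are garbage addresses, which usually means the stack itself was overwritten; `bt full` on a symbolized core shows how far the stack walk can be trusted.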

Please let me know if there are more specific steps I can take to pinpoint the issue.
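Regarding Georgii's question downthread about which socket cores 20 and 22 are on: `lscpu -p=CPU,NODE` prints each core's NUMA node. A small parsing sketch (the sample topology below is invented for illustration, not taken from this machine):

```python
# Map each CPU to its NUMA node from `lscpu -p=CPU,NODE` output.
# SAMPLE is a made-up topology for illustration only.
SAMPLE = """\
# CPU,NODE
0,0
2,0
4,0
20,0
21,1
22,0
"""

def core_to_node(lscpu_output):
    """Parse `lscpu -p=CPU,NODE` text into {cpu: numa_node}."""
    mapping = {}
    for line in lscpu_output.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the header comment and blank lines
        cpu, node = line.split(",")
        mapping[int(cpu)] = int(node)
    return mapping

nodes = core_to_node(SAMPLE)
# In this invented sample, cores 20 and 22 both sit on node 0.
print(nodes[20], nodes[22])
```

If cores 20 and 22 were to map to node 1 while the config pins `vdev crypto_aesni_mb0,socket_id=0`, that mismatch would match the crash Georgii describes.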

On Mon, Nov 4, 2019 at 10:13 AM Damjan Marion <[email protected]> wrote:

> I remember doing corelist-workers with more than 50 cores…
>
> If you paste the traceback we may have a better clue about what is wrong…
>
> —
> Damjan
>
>
>
> On 4 Nov 2019, at 18:51, Chuan Han via Lists.Fd.Io <[email protected]> wrote:
>
> All even number cores are on numa 0, which also hosts all nics.
>
> It seems corelist-workers can only take a maximum of 8 cores.
>
> On Mon, Nov 4, 2019 at 9:45 AM Tkachuk, Georgii <[email protected]> wrote:
>
>> Hi Chuan,  are cores 20 and 22 on socket0 or socket1? If they are on
>> socket1, the application is crashing because the aesni_mb driver is
>> pointing to socket0: vdev crypto_aesni_mb0,socket_id=0.
>>
>>
>>
>> George
>>
>>
>>
>> From: [email protected] <[email protected]> On Behalf Of Chuan Han via Lists.Fd.Io <http://lists.fd.io/>
>> Sent: Monday, November 04, 2019 10:27 AM
>> To: vpp-dev <[email protected]>
>> Cc: [email protected]
>> Subject: [vpp-dev] Is there a limit when assigning corelist-workers in vpp?
>>
>>
>>
>> Hi, vpp experts,
>>
>>
>>
>> I am trying to allocate more cores to a phy nic. I want to allocate cores
>> 4,6,8,10 to eth0, and cores 12,14,16,18 to eth1.
>>
>>
>>
>> cpu {
>>   main-core 2
>>   # corelist-workers 4,6,8,10,12,14,16,18,20,22  <== does not work; vpp crashes when starting
>>   corelist-workers 4,6,8,10,12,14,16,18
>> }
>>
>> dpdk {
>>   socket-mem 2048,0
>>   log-level debug
>>   no-tx-checksum-offload
>>   dev default{
>>     num-tx-desc 512
>>     num-rx-desc 512
>>   }
>>   dev 0000:1a:00.0 {
>>     # workers 4,6,8,10,12
>>     workers 4,6,8,10
>>     name eth0
>>   }
>>   dev 0000:19:00.1 {
>>     # workers 14,16,18,20,22
>>     workers 12,14,16,18
>>     name eth1
>>   }
>>   # Use aesni mb lib.
>>   vdev crypto_aesni_mb0,socket_id=0
>>   # Use qat VF pcie addresses.
>> #  dev 0000:3d:01.0
>>   no-multi-seg
>> }
>>
>>
>>
>> After vpp starts, I can see eth1 got 4 cores but eth0 only got 3 cores.
>>
>>
>>
>> vpp# sh thread
>> ID     Name                Type        LWP     Sched Policy (Priority)  lcore  Core   Socket State
>> 0      vpp_main                        10653   other (0)                2      0      0
>> 1      vpp_wk_0            workers     10655   other (0)                4      1      0
>> 2      vpp_wk_1            workers     10656   other (0)                6      4      0
>> 3      vpp_wk_2            workers     10657   other (0)                8      2      0
>> 4      vpp_wk_3            workers     10658   other (0)                10     3      0
>> 5      vpp_wk_4            workers     10659   other (0)                12     8      0
>> 6      vpp_wk_5            workers     10660   other (0)                14     13     0
>> 7      vpp_wk_6            workers     10661   other (0)                16     9      0
>> 8      vpp_wk_7            workers     10662   other (0)                18     12     0   <=== core 18 is not used by eth0
>> vpp# sh interface rx-placement
>> Thread 1 (vpp_wk_0):
>>   node dpdk-input:
>>     eth1 queue 0 (polling)
>> Thread 2 (vpp_wk_1):
>>   node dpdk-input:
>>     eth1 queue 1 (polling)
>> Thread 3 (vpp_wk_2):
>>   node dpdk-input:
>>     eth1 queue 2 (polling)
>> Thread 4 (vpp_wk_3):
>>   node dpdk-input:
>>     eth1 queue 3 (polling)
>>
>>
>>
>> Thread 5 (vpp_wk_4):
>>   node dpdk-input:
>>     eth0 queue 0 (polling)
>>     eth0 queue 2 (polling)
>> Thread 6 (vpp_wk_5):
>>   node dpdk-input:
>>     eth0 queue 3 (polling)
>> Thread 7 (vpp_wk_6):
>>   node dpdk-input:
>>     eth0 queue 1 (polling)
>> vpp#
>>
>>
>>
>> It seems there is a limitation on assigning cores to a NIC:
>>
>> 1. I cannot allocate cores after 20 to corelist-workers.
>>
>> 2. Cores after 18 cannot be allocated to a NIC.
>>
>>
>>
>> Is this a bug, or some undocumented limitation?
>>
>>
>>
>> Thanks.
>>
>> Chuan
>>
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.

View/Reply Online (#14494): https://lists.fd.io/g/vpp-dev/message/14494
Mute This Topic: https://lists.fd.io/mt/41334952/21656
Group Owner: [email protected]
Unsubscribe: https://lists.fd.io/g/vpp-dev/unsub  [[email protected]]
-=-=-=-=-=-=-=-=-=-=-=-
