I was able to manually fix the NIC queue assignment by issuing the command below. The output that follows shows the placement before the fix, the CLI exploration that found the right syntax, and the placement after the fix.

set interface rx-placement eth0 queue 2 worker 7
vpp# sh interface rx-placement
Thread 1 (vpp_wk_0):
node dpdk-input:
eth1 queue 0 (polling)
Thread 2 (vpp_wk_1):
node dpdk-input:
eth1 queue 1 (polling)
Thread 3 (vpp_wk_2):
node dpdk-input:
eth1 queue 2 (polling)
Thread 4 (vpp_wk_3):
node dpdk-input:
eth1 queue 3 (polling)
*Thread 5 (vpp_wk_4): <=== not sure why this thread got two queues.
node dpdk-input:
eth0 queue 0 (polling)
eth0 queue 2 (polling)*
Thread 6 (vpp_wk_5):
node dpdk-input:
eth0 queue 3 (polling)
Thread 7 (vpp_wk_6):
node dpdk-input:
eth0 queue 1 (polling)
vpp# set interface p
promiscuous proxy-arp
vpp# set interface rx
rx-mode rx-placement
vpp# set interface rx-placement eth0 queue 2 thread 8
set interface rx-placement: parse error: 'thread 8'
vpp# set interface rx-placement eth0 queue 2 thread 7
set interface rx-placement: parse error: 'thread 7'
vpp# set interface rx-placement eth0 queue 2 ?
set interface rx-placement: parse error: '?'
vpp# set interface rx-placement ?
set interface rx-placement    set interface rx-placement <interface> [queue <n>] [worker <n> | main]
vpp# set interface rx-placement eth0 queue 2 worker 7
vpp# sh interface rx-placement
Thread 1 (vpp_wk_0):
node dpdk-input:
eth1 queue 0 (polling)
Thread 2 (vpp_wk_1):
node dpdk-input:
eth1 queue 1 (polling)
Thread 3 (vpp_wk_2):
node dpdk-input:
eth1 queue 2 (polling)
Thread 4 (vpp_wk_3):
node dpdk-input:
eth1 queue 3 (polling)
Thread 5 (vpp_wk_4):
node dpdk-input:
eth0 queue 0 (polling)
Thread 6 (vpp_wk_5):
node dpdk-input:
eth0 queue 3 (polling)
Thread 7 (vpp_wk_6):
node dpdk-input:
eth0 queue 1 (polling)
Thread 8 (vpp_wk_7):
node dpdk-input:
eth0 queue 2 (polling)
On Mon, Nov 4, 2019 at 10:13 AM Damjan Marion <[email protected]> wrote:
> I remember doing corelist-workers with > 50 cores…
>
> If you paste the traceback, we may have a better clue about what is wrong…
>
> —
> Damjan
>
>
>
> On 4 Nov 2019, at 18:51, Chuan Han via Lists.Fd.Io <[email protected]> wrote:
>
> All even-numbered cores are on NUMA node 0, which also hosts all NICs.
>
> It seems corelist-workers can only take a maximum of 8 cores.
>
> On Mon, Nov 4, 2019 at 9:45 AM Tkachuk, Georgii <[email protected]> wrote:
>
>> Hi Chuan, are cores 20 and 22 on socket0 or socket1? If they are on
>> socket1, the application is crashing because the aesni_mb driver is
>> pointing to socket0: vdev crypto_aesni_mb0,socket_id=0.
>>
>>
>>
>> George
>>
>>
>>
>> *From:* [email protected] <[email protected]> *On Behalf Of *Chuan
>> Han via Lists.Fd.Io <http://lists.fd.io/>
>> *Sent:* Monday, November 04, 2019 10:27 AM
>> *To:* vpp-dev <[email protected]>
>> *Cc:* [email protected]
>> *Subject:* [vpp-dev] Is there a limit when assigning corelist-workers in
>> vpp?
>>
>>
>>
>> Hi, vpp experts,
>>
>>
>>
>> I am trying to allocate more cores to a physical NIC. I want to allocate cores
>> 4,6,8,10 to eth0, and cores 12,14,16,18 to eth1.
>>
>>
>>
>> cpu {
>> main-core 2
>> * # corelist-workers 4,6,8,10,12,14,16,18,20,22 <== This does not work. vpp crashes when starting. *
>> corelist-workers 4,6,8,10,12,14,16,18
>> }
>>
>> dpdk {
>> socket-mem 2048,0
>> log-level debug
>> no-tx-checksum-offload
>> dev default{
>> num-tx-desc 512
>> num-rx-desc 512
>> }
>> dev 0000:1a:00.0 {
>> # workers 4,6,8,10,12
>> workers 4,6,8,10
>> name eth0
>> }
>> dev 0000:19:00.1 {
>> # workers 14,16,18,20,22
>> workers 12,14,16,18
>> name eth1
>> }
>> # Use aesni mb lib.
>> vdev crypto_aesni_mb0,socket_id=0
>> # Use qat VF pcie addresses.
>> # dev 0000:3d:01.0
>> no-multi-seg
>> }
>>
>>
>>
>> After VPP starts, I can see that eth1 got 4 cores but eth0 only got 3 cores.
>>
>>
>>
>> vpp# sh thread
>> ID  Name      Type     LWP    Sched Policy (Priority)  lcore  Core  Socket State
>> 0   vpp_main           10653  other (0)                2      0     0
>> 1   vpp_wk_0  workers  10655  other (0)                4      1     0
>> 2   vpp_wk_1  workers  10656  other (0)                6      4     0
>> 3   vpp_wk_2  workers  10657  other (0)                8      2     0
>> 4   vpp_wk_3  workers  10658  other (0)                10     3     0
>> 5   vpp_wk_4  workers  10659  other (0)                12     8     0
>> 6   vpp_wk_5  workers  10660  other (0)                14     13    0
>> 7   vpp_wk_6  workers  10661  other (0)                16     9     0
>> *8  vpp_wk_7  workers  10662  other (0)                18     12    0*  <=== core 18 is not used by eth0.
>> vpp# sh interface rx-placement
>> Thread 1 (vpp_wk_0):
>> node dpdk-input:
>> eth1 queue 0 (polling)
>> Thread 2 (vpp_wk_1):
>> node dpdk-input:
>> eth1 queue 1 (polling)
>> Thread 3 (vpp_wk_2):
>> node dpdk-input:
>> eth1 queue 2 (polling)
>> Thread 4 (vpp_wk_3):
>> node dpdk-input:
>> eth1 queue 3 (polling)
>>
>>
>>
>> *Thread 5 (vpp_wk_4):
>> node dpdk-input:
>> eth0 queue 0 (polling)
>> eth0 queue 2 (polling)*
>> Thread 6 (vpp_wk_5):
>> node dpdk-input:
>> eth0 queue 3 (polling)
>> Thread 7 (vpp_wk_6):
>> node dpdk-input:
>> eth0 queue 1 (polling)
>> vpp#
>>
>>
>>
>> It seems there is a limitation on assigning cores to a NIC.
>>
>> 1. I cannot allocate cores after 20 to corelist-workers.
>>
>> 2. Cores after 18 cannot be allocated to a NIC.
>>
>>
>>
>> Is this a bug, or some undocumented limitation?
>>
>>
>>
>> Thanks.
>>
>> Chuan
>>