Ah... I see. That explains everything!

Thanks for catching this.

It would be more helpful if vpp printed more meaningful logs or had better
documentation for this.

On Thu, Nov 14, 2019 at 7:17 AM Benoit Ganne (bganne) <[email protected]>
wrote:

> Hi Chuan,
>
> I took a deeper look at your conf and actually realized that the dpdk
> 'workers' stanza does not refer to core numbers but to worker ids.
> So, when you say "cpu { corelist-workers 4,6,8,10,12,14,16,18,20,22 }" you
> define 10 workers with ids from 0 to 9 and pin them to specific cores (4, 6,
> etc.).
> Then, with "dpdk { ... dev ... { workers 4,6,8,10,12 } ..." you refer to
> the *worker ids*, numbered from 0 to 9, not to cores.
> So the correct dpdk stanzas should be "workers 0,1,2,3,4" and "workers
> 5,6,7,8,9" instead (note: you can also use "workers 0-4" and "workers 5-9").
> What is confusing is that dpdk will happily parse nonexistent worker ids,
> and then, when assigning queues to workers, it will respect the conf for the
> known ones (4, 6, 8) and do round-robin assignment for the unknown ones,
> ignoring the manual assignments already set.
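>
> To make it concrete, with your corelist the corrected per-device stanzas
> would look roughly like this (an untested sketch based on the above; all
> other dpdk settings unchanged):
>
> dpdk {
>   dev 0000:1a:00.0 {
>     name eth0
>     workers 0-4    # worker ids 0-4 run on cores 4,6,8,10,12
>   }
>   dev 0000:19:00.1 {
>     name eth1
>     workers 5-9    # worker ids 5-9 run on cores 14,16,18,20,22
>   }
> }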
>
> Best
> Ben
>
> > -----Original Message-----
> > From: Chuan Han <[email protected]>
> > Sent: lundi 11 novembre 2019 23:23
> > To: Benoit Ganne (bganne) <[email protected]>
> > Cc: Dave Barach (dbarach) <[email protected]>; vpp-dev <[email protected]>
> > Subject: Re: [vpp-dev] Is there a limit when assigning corelist-workers
> > in vpp?
> >
> > If I do not manually assign cores to the nics and let vpp assign them,
> > vpp assigns only one core per nic queue. If I also set the number of rx
> > queues to 10, all cores are used. Otherwise, only cores 4 and 6 are
> > assigned, one per nic, and each nic has only one queue.
> >
> > Is there perhaps some smart logic that adaptively assigns cores to nics?
> > Anyway, a single core per nic is good enough for us for now.
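> >
> > If we ever need to move a particular queue to another worker at runtime,
> > I assume something like the following would do it (I have not verified
> > the exact CLI syntax on this build):
> >
> > vpp# set interface rx-placement eth0 queue 0 worker 3
> > vpp# show interface rx-placement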
> >
> > Without specifying the number of rx queues per nic, only cores 4 and 6
> > are assigned:
> >
> > vpp# sh threads
> > ID     Name                Type        LWP     Sched Policy (Priority)  lcore  Core   Socket State
> > 0      vpp_main                        16886   other (0)                2      0      0
> > 1      vpp_wk_0            workers     16888   other (0)                4      1      0
> > 2      vpp_wk_1            workers     16889   other (0)                6      4      0
> > 3      vpp_wk_2            workers     16890   other (0)                8      2      0
> > 4      vpp_wk_3            workers     16891   other (0)                10     3      0
> > 5      vpp_wk_4            workers     16892   other (0)                12     8      0
> > 6      vpp_wk_5            workers     16893   other (0)                14     13     0
> > 7      vpp_wk_6            workers     16894   other (0)                16     9      0
> > 8      vpp_wk_7            workers     16895   other (0)                18     12     0
> > 9      vpp_wk_8            workers     16896   other (0)                20     10     0
> > 10     vpp_wk_9            workers     16897   other (0)                22     11     0
> > vpp# sh interface rx-placement
> > Thread 1 (vpp_wk_0):
> >   node dpdk-input:
> >     eth1 queue 0 (polling)
> > Thread 2 (vpp_wk_1):
> >   node dpdk-input:
> >     eth0 queue 0 (polling)
> > vpp#
> >
> >
> > cpu {
> >   main-core 2
> >   corelist-workers 4,6,8,10,12,14,16,18,20,22
> > }
> >
> > dpdk {
> >   socket-mem 2048,0
> >   log-level debug
> >   no-tx-checksum-offload
> >   dev default {
> >     num-tx-desc 512
> >     num-rx-desc 512
> >   }
> >   dev 0000:1a:00.0 {
> >     name eth0
> >   }
> >   dev 0000:19:00.1 {
> >     name eth1
> >   }
> >   # Use aesni mb lib.
> >   vdev crypto_aesni_mb0,socket_id=0
> >   no-multi-seg
> > }
> >
> >
> > When specifying the number of rx queues per nic, all cores are used:
> >
> > vpp# sh int rx-placement
> > Thread 1 (vpp_wk_0):
> >   node dpdk-input:
> >     eth1 queue 0 (polling)
> >     eth0 queue 0 (polling)
> > Thread 2 (vpp_wk_1):
> >   node dpdk-input:
> >     eth1 queue 1 (polling)
> >     eth0 queue 1 (polling)
> > Thread 3 (vpp_wk_2):
> >   node dpdk-input:
> >     eth1 queue 2 (polling)
> >     eth0 queue 2 (polling)
> > Thread 4 (vpp_wk_3):
> >   node dpdk-input:
> >     eth1 queue 3 (polling)
> >     eth0 queue 3 (polling)
> > Thread 5 (vpp_wk_4):
> >   node dpdk-input:
> >     eth1 queue 4 (polling)
> >     eth0 queue 4 (polling)
> > Thread 6 (vpp_wk_5):
> >   node dpdk-input:
> >     eth1 queue 5 (polling)
> >     eth0 queue 5 (polling)
> > Thread 7 (vpp_wk_6):
> >   node dpdk-input:
> >     eth1 queue 6 (polling)
> >     eth0 queue 6 (polling)
> > Thread 8 (vpp_wk_7):
> >   node dpdk-input:
> >     eth1 queue 7 (polling)
> >     eth0 queue 7 (polling)
> > Thread 9 (vpp_wk_8):
> >   node dpdk-input:
> >     eth1 queue 8 (polling)
> >     eth0 queue 8 (polling)
> > Thread 10 (vpp_wk_9):
> >   node dpdk-input:
> >     eth1 queue 9 (polling)
> >     eth0 queue 9 (polling)
> > vpp#  sh threads
> > ID     Name                Type        LWP     Sched Policy (Priority)  lcore  Core   Socket State
> > 0      vpp_main                        16905   other (0)                2      0      0
> > 1      vpp_wk_0            workers     16907   other (0)                4      1      0
> > 2      vpp_wk_1            workers     16908   other (0)                6      4      0
> > 3      vpp_wk_2            workers     16909   other (0)                8      2      0
> > 4      vpp_wk_3            workers     16910   other (0)                10     3      0
> > 5      vpp_wk_4            workers     16911   other (0)                12     8      0
> > 6      vpp_wk_5            workers     16912   other (0)                14     13     0
> > 7      vpp_wk_6            workers     16913   other (0)                16     9      0
> > 8      vpp_wk_7            workers     16914   other (0)                18     12     0
> > 9      vpp_wk_8            workers     16915   other (0)                20     10     0
> > 10     vpp_wk_9            workers     16916   other (0)                22     11     0
> > vpp#
> >
> >
> > cpu {
> >   main-core 2
> >   corelist-workers 4,6,8,10,12,14,16,18,20,22
> > }
> >
> > dpdk {
> >   socket-mem 2048,0
> >   log-level debug
> >   no-tx-checksum-offload
> >   dev default {
> >     num-tx-desc 512
> >     num-rx-desc 512
> >     num-rx-queues 10
> >   }
> >   dev 0000:1a:00.0 {
> >     name eth0
> >   }
> >   dev 0000:19:00.1 {
> >     name eth1
> >   }
> >   # Use aesni mb lib.
> >   vdev crypto_aesni_mb0,socket_id=0
> >   no-multi-seg
> > }
> >
> >
> > On Fri, Nov 8, 2019 at 12:54 AM Benoit Ganne (bganne) via Lists.Fd.Io
> > <[email protected]> wrote:
> >
> >
> >       Hi Chuan,
> >
> >       > The weird thing is that when I reduced the number of workers
> >       > everything worked fine. I did send 8.5Gbps udp/tcp traffic over
> >       > the two machines. I also saw encryption/decryption happening.
> >       > How could this be possible without crypto engine?
> >
> >       Hmm, that should not have happened :)
> >       Frankly I have no idea, but maybe there was some other difference
> >       in the configuration?
> >
> >       > I installed vpp-plugin-core. No more crash was seen.
> >       > However, when I reserved 10 cores for the 2 nics, some cores are
> >       > not polling, as shown by htop.
> >       [...]
> >       > Any clue why? Or is there more misconfig here? Or a bug?
> >
> >       The config looks sane, so it might be a bug in the way we parse the
> >       dpdk workers conf. If, instead of manually assigning cores to the
> >       NICs, you let VPP assign them, is it better? E.g.:
> >
> >       dpdk {
> >         socket-mem 2048,0
> >         log-level debug
> >         no-tx-checksum-offload
> >         dev default {
> >           num-tx-desc 512
> >           num-rx-desc 512
> >           num-rx-queues 10
> >         }
> >         dev 0000:1a:00.0 {
> >           name eth0
> >         }
> >         dev 0000:19:00.1 {
> >           name eth1
> >         }
> >       }
> >
> >       Thx
> >       ben
> >
>
>
