From your output:
140000000 default file=/memfd:seg_0-0\040(deleted) huge dirty=1 N0=1
kernelpagesize_kB=1048576 ### 1 page
1000000000 default file=/memfd:buffers-numa-0\040(deleted) huge dirty=5 N0=5
kernelpagesize_kB=1048576 ### 5 pages
1140000000 default file=/memfd:buffers-numa-1\040(deleted) huge dirty=1 N1=1
kernelpagesize_kB=1048576 ### 1 page
7eefc0000000 default file=/memfd:seg_2-0\040(deleted) huge dirty=1 N1=1
kernelpagesize_kB=1048576 ### 1 page
7f0040000000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N0=1
kernelpagesize_kB=1048576 ### 1 page
In total, 9 pages of 1 GB size are allocated.
But your CPU's 1 GB dTLB has only 4 entries: "data TLB: 1G pages, 4-way, 4
entries".
I think this maps exactly to poor dTLB utilization: with 9 hot 1 GB mappings
competing for 4 entries, you get frequent TLB misses.
So it seems that you need to reduce the number of 1 GB hugepages due to this
limitation of your processor. Intel CPUs up to very recent generations have this
limitation, I think.
Generally, a good solution would be to use 1 GB pages only for buffers and make
sure that buffers do not consume more than 4 pages (4 GB) of memory.
To do this you can try setting "page-size 1G" in the buffers {} section of the
config, and changing "default-hugepage-size" to "2M" in the "memory" section of
the config.
Also, reduce "buffers 2097152" to a smaller value, since buffers alone already
take 6 pages (5 pages on NUMA 0 and 1 page on NUMA 1).
Using this many buffers is huge overkill for most cases, but I'm not sure what
your case is and whether you really need that many buffers.
If you could fit all buffers in a single 1G page, it could help a lot with
performance. Something like "buffers 400000" could be enough for that.
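A back-of-the-envelope check of the page counts above (assuming roughly 2496
bytes per buffer, a typical per-buffer footprint reported by "show buffers";
your build may use a different size):

```python
BYTES_PER_BUFFER = 2496   # assumption: typical VPP buffer footprint
GIB = 1 << 30             # one 1 GB hugepage

def pages_needed(n_buffers, page_bytes=GIB):
    """Ceiling division: 1G hugepages needed to hold n_buffers."""
    total = n_buffers * BYTES_PER_BUFFER
    return -(-total // page_bytes)

print(pages_needed(2097152))  # current setting -> 5 pages per NUMA node
print(pages_needed(400000))   # proposed setting -> fits in a single page
```

This matches the numa_maps output above: 5 pages on NUMA 0 for the current
2097152 buffers, while 400000 buffers would fit in one page.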
In any case, the statseg memory could use 2M pages, since it is usually small
and caches well; 1 GB pages are probably overkill for stats.
Also, as mentioned previously, make sure that the CPU affinity matches the NUMA
node of the network card. AFAIR, when you allocate CPUs from a single NUMA node
only, then only a single NUMA node should appear in "show buffers" and in the
numa_maps file. That way you know it will perform best.
View/Reply Online (#26668): https://lists.fd.io/g/vpp-dev/message/26668