On 22.4.2021. 0:26, Alexander Bluhm wrote:
> On Wed, Apr 21, 2021 at 11:28:17PM +0200, Hrvoje Popovski wrote:
>> With this diff I'm getting a panic when I'm pushing traffic over that box.
>
> Thanks for testing.
>
>> I'm sending traffic from a host connected on ix0, from address 10.10.0.1,
>> to a host connected to ix1, to addresses 10.11.0.1 - 10.11.255.255, at
>> about 10Mpps.
>
> I don't see the panic, but for you it is easily reproducible.  I
> use only 1 destination address, but you have 65000.  Maybe it is a
> routing or ARP issue.
>
>> x3550m4# panic: pool_cache_item_magic_check: mbufpl cpu free list
>> modified: item addr 0xfffffd8066bbd6
>
> This is a use-after-free bug with the mbuf.  Either our pool is not
> MP safe, or the mbuf handling somewhere in the driver or network stack
> is buggy.
>
> As a wild guess, you could apply this diff on top.  Something similar
> has fixed an IPv6 NDP problem I have seen.  Maybe it is in the routing
> table that is used for ARP and NDP.
>
> bluhm
With this diff I can't reproduce the panic. Here are the numbers:

NET_TASKQ  1 = 650Kpps
NET_TASKQ  4 = 1Mpps
NET_TASKQ 12 = 720Kpps

:)

  PID     TID PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
94375  474911  64    0    0K 1260K onproc/7  -         2:27 99.02% softnet
65830  241852  64    0    0K 1260K onproc/1  -         2:26 99.02% softnet
74140  395507  64    0    0K 1260K onproc/8  -         2:26 99.02% softnet
44640  313279  64    0    0K 1260K onproc/2  -         2:26 99.02% softnet
42633  112756  64    0    0K 1260K onproc/5  -         2:26 99.02% softnet
39742  473606  64    0    0K 1260K onproc/11 -         2:26 99.02% softnet
77284  459909  64    0    0K 1260K onproc/3  -         2:26 99.02% softnet
 6070  206045  64    0    0K 1260K onproc/10 -         2:26 99.02% softnet
31495  401200  64    0    0K 1260K onproc/4  -         2:26 99.02% softnet
99034  582427  64    0    0K 1260K onproc/9  -         2:26 99.02% softnet
61432  149664  10    0    0K 1260K sleep/0   bored     0:17 14.26% softnet
