> On 11 Aug 2016, at 18:36, Adrian Chadd <adrian.ch...@gmail.com> wrote:
> 
> Hi!
> 
> mlx4_core0: <mlx4_core> mem
> 0xfbe00000-0xfbefffff,0xfb000000-0xfb7fffff irq 64 at device 0.0
> numa-domain 1 on pci16
> mlx4_core: Initializing mlx4_core: Mellanox ConnectX VPI driver v2.1.6
> (Aug 11 2016)
> 
> so the NIC is in numa-domain 1. Try pinning the worker threads to
> numa-domain 1 when you run the test:
> 
> numactl -l first-touch-rr -m 1 -c 1 ./test-program
> 
> You can also try pinning the NIC threads to numa-domain 1 versus 0 (so
> the second set of CPUs, not the first set.)
> 
> vmstat -ia | grep mlx (get the list of interrupt thread ids)
> then for each:
> 
> cpuset -d 1 -x <irq id>
> 
> Run pcm-memory.x each time so we can see the before and after effects
> on local versus remote memory access.
> 
> Thanks!
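
If I read the per-IRQ suggestion correctly, that would be something along
these lines (just a sketch on my side, assuming the interrupts show up in
vmstat -ia as "irqNNN: mlx4_core0 ..."):

  vmstat -ia | grep mlx | sed 's/^irq\(.*\):.*/\1/' | while read i
  do
    cpuset -d 1 -x $i    # bind this mlx4 interrupt to numa-domain 1
  done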

While waiting for confirmation of the correct commands to use, I ran some tests with:

  cpuset -l 0-11 <iperf_command>
or
  cpuset -l 12-23 <iperf_command>

and:

  # round-robin the mlx4 interrupts over CPUs 0-11
  c=0
  vmstat -ia | grep mlx | sed 's/^irq\(.*\):.*/\1/' | while read i
  do
    cpuset -l $c -x $i ; ((c++)) ; [[ $c -gt 11 ]] && c=0
  done
or
  # round-robin the mlx4 interrupts over CPUs 12-23
  c=12
  vmstat -ia | grep mlx | sed 's/^irq\(.*\):.*/\1/' | while read i
  do
    cpuset -l $c -x $i ; ((c++)) ; [[ $c -gt 23 ]] && c=12
  done
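
To double-check where the interrupts end up after either loop, I believe the
resulting bindings can be listed like this (same assumption about the
vmstat -ia output format):

  vmstat -ia | grep mlx | sed 's/^irq\(.*\):.*/\1/' | while read i
  do
    cpuset -g -x $i    # print which CPUs this mlx4 interrupt is bound to
  done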

Results:

No pinning
http://pastebin.com/raw/CrK1CQpm

Pinning workers to 0-11
Pinning NIC IRQ to 0-11
http://pastebin.com/raw/kLEQ6TKL

Pinning workers to 12-23
Pinning NIC IRQ to 12-23
http://pastebin.com/raw/qGxw9KL2

Pinning workers to 12-23
Pinning NIC IRQ to 0-11
http://pastebin.com/raw/tFjii629

Comments:

Strangely, the best iperf throughput is obtained when nothing is pinned at all,
whereas before running the kernel with your new options, the best results came
from pinning everything to 0-11.

Feel free to ask for further testing.

Ben
