Re: [OMPI users] ORTE HNP Daemon Error - Generated by Tweaking MTU

2020-08-10 Thread John Duffy via users
Thanks Ralph, I will do all of that. Much appreciated.

Re: [OMPI users] ORTE HNP Daemon Error - Generated by Tweaking MTU

2020-08-09 Thread John Duffy via users
Thanks Gilles, I realise this is “off topic”. I was hoping the Open-MPI ORTE/HNP message might give me a clue where to look for my driver problem. Regarding P/Q ratios, P=2 & Q=16 does indeed give me better performance. Kind regards
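
For reference, a minimal sketch of how such a process grid is declared in HPL.dat, assuming a single grid of 32 MPI ranks (8 nodes x 4 cores) with the surrounding lines elided:

    1            # of process grids (P x Q)
    2            Ps
    16           Qs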

[OMPI users] ORTE HNP Daemon Error - Generated by Tweaking MTU

2020-08-09 Thread John Duffy via users
Hi. I have generated this problem myself by tweaking the MTU of my 8-node Raspberry Pi 4 cluster to 9000 bytes, but I would be grateful for any ideas/suggestions on how to relate the Open-MPI ORTE message to my tweaking. When I run HPL Linpack using my “improved” cluster, it runs quite happily
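
For context, a jumbo-frame MTU of this kind is usually set (and reverted) per interface with iproute2; a minimal sketch, assuming the interface is eth0:

    sudo ip link set dev eth0 mtu 9000   # enable jumbo frames
    sudo ip link set dev eth0 mtu 1500   # revert to the Ethernet default

This is not persistent across reboots; on Ubuntu 20.04 a permanent change would normally go into the netplan configuration instead.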

[OMPI users] Correct mpirun Options for Hybrid OpenMPI/OpenMP

2020-08-05 Thread John Duffy via users
Ralph, John, Prentice, thank you for your replies. Indeed, --bind-to none or --bind-to socket solved my problem… mpirun --bind-to socket -host node1,node2 -x OMP_NUM_THREADS=4 -np 2 xhpl … happily runs 2 xhpl processes, one on each node, with 4 cores fully utilised. The hints about
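
A quick way to confirm the placement is mpirun's --report-bindings option, which prints each rank's binding at launch; a sketch based on the command above:

    mpirun --bind-to socket --report-bindings -host node1,node2 \
           -x OMP_NUM_THREADS=4 -np 2 xhpl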

[OMPI users] Correct mpirun Options for Hybrid OpenMPI/OpenMP

2020-08-03 Thread John Duffy via users
Hi. I’m experimenting with hybrid OpenMPI/OpenMP Linpack benchmarks on my small cluster, and I’m a bit confused about how to invoke mpirun. I have compiled/linked HPL-2.3 with OpenMPI and libopenblas-openmp using the GCC -fopenmp option on Ubuntu 20.04 64-bit. With P=1 and Q=1 in HPL.dat, if I
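
For what it's worth, the usual constraint is that P x Q in HPL.dat must equal the number of MPI ranks passed to -np, with OMP_NUM_THREADS covering the remaining cores on each node; a hedged sketch for a 2-node run with 4 cores per node (host names are illustrative):

    # P=1, Q=2 in HPL.dat  ->  2 MPI ranks, 4 OpenMP threads each
    mpirun -np 2 -host node1,node2 -x OMP_NUM_THREADS=4 xhpl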

Re: [OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread John Duffy via users
Hi Joseph, John. Thank you for your replies. I’m using Ubuntu 20.04 aarch64 on an 8 x Raspberry Pi 4 cluster. The symptoms I’m experiencing are that the HPL Linpack performance in Gflops increases on a single core as NB is increased from 32 to 256. The theoretical maximum is 6 Gflops per core. I can
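
For reference, the 6 Gflops per core figure presumably comes from the Cortex-A72's double-precision NEON FMA throughput, assuming the stock 1.5 GHz clock:

    1.5 GHz x 2 doubles per 128-bit NEON register x 2 flops per FMA = 6 Gflop/s per core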

[OMPI users] Vader - Where to Look for Shared Memory Use

2020-07-22 Thread John Duffy via users
Hi. I’m trying to investigate an HPL Linpack scaling issue on a single node, increasing from 1 to 4 cores. Regarding single-node messages, I understand that Open-MPI will select the most efficient mechanism, which in this case I think should be vader shared memory. But when I run
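
One way to see (or force) the transport Open-MPI selects is via MCA parameters on the mpirun command line; a minimal sketch, assuming a 4-rank single-node run:

    # restrict to shared memory + loopback, and print the BTL selection
    mpirun --mca btl self,vader --mca btl_base_verbose 100 -np 4 xhpl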

[OMPI users] choosing network: infiniband vs. ethernet

2020-07-17 Thread John Duffy via users
Hi Lana. I’m an Open MPI newbie too, but I managed to build Open MPI 4.0.4 quite easily on Ubuntu 20.04 just by following the instructions in README/INSTALL in the top-level source directory, namely: mkdir build cd build ../configure CFLAGS="-O3" # My CFLAGS make all sudo make all sudo make
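
Spelled out line by line, the usual out-of-tree build sequence from the Open MPI README looks roughly like this; the final "sudo make install" is the conventional last step and is an assumption here, since it is not quoted above:

    mkdir build
    cd build
    ../configure CFLAGS="-O3"    # My CFLAGS
    make all
    sudo make install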

Re: [OMPI users] MTU Size and Open-MPI/HPL Benchmark

2020-07-15 Thread John Duffy via users
Thank you Jeff and Gilles. Kind regards John

[OMPI users] MTU Size and Open-MPI/HPL Benchmark

2020-07-15 Thread John Duffy via users
Hi. Ubuntu 20.04 aarch64, Open-MPI 4.0.3, HPL 2.3. Having changed the MTU across my small cluster from 1500 to 9000, I’m wondering how/if Open-MPI can take advantage of this increased maximum packet size. ip link show eth0 2: eth0: mtu 9000 qdisc mq state UP mode DEFAULT group default qlen
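
One place to start is listing the TCP BTL's tunables with ompi_info; the socket buffer parameters btl_tcp_sndbuf and btl_tcp_rcvbuf (whose defaults defer to the kernel) are examples of what shows up there:

    ompi_info --param btl tcp --level 9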