Hi, I’m trying to investigate an HPL (Linpack) scaling issue on a single node, going from 1 to 4 cores.
For messages within a single node, my understanding is that Open MPI will select the most efficient transport available, which in this case I think should be the vader shared-memory BTL. But when I run Linpack, ipcs -m gives…

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

and ipcs -u gives…

------ Messages Status --------
allocated queues = 0
used headers = 0
used space = 0 bytes

------ Shared Memory Status --------
segments allocated 0
pages allocated 0
pages resident 0
pages swapped 0
Swap performance: 0 attempts 0 successes

------ Semaphore Status --------
used arrays = 0
allocated semaphores = 0

Am I looking in the wrong place to see how (or whether) vader is using shared memory? I’m wondering if a slower mechanism is being used instead.

My ompi_info output includes...

MCA btl: openib (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: tcp (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: vader (MCA v2.1.0, API v3.1.0, Component v4.0.3)
MCA btl: self (MCA v2.1.0, API v3.1.0, Component v4.0.3)

Best wishes
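P.S. In case it helps, this is roughly how I was thinking of checking which BTL actually gets selected (assuming I have the MCA parameter syntax right; ./xhpl and the 4-rank count are just placeholders for my actual HPL run):

    mpirun -np 4 \
        --mca btl self,vader \
        --mca btl_base_verbose 100 \
        ./xhpl

My reasoning is that restricting btl to self,vader should make the job fail outright if vader can't be used, and btl_base_verbose should print the BTL selection decisions so I can see what each rank ends up with.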