Hi, for another application we found that 4-8 MPI ranks per node were necessary to saturate network bandwidth. Since that application was network-bandwidth limited, this was key to performance.

Joel
Sent from my Samsung Galaxy S8

-------- Original message --------
From: James Healy <[email protected]>
Date: 1/20/18 10:21 AM (GMT-05:00)
To: Einstein Toolkit Users <[email protected]>, Yosef Zlochower <[email protected]>, Carlos Lousto <[email protected]>
Subject: [Users] Using Stampede2 SKX

Hello all,

I am trying to run on the new Skylake processors on Stampede2, and while the run speeds we are obtaining are very good, we are concerned that we aren't optimizing properly when it comes to OpenMP. For instance, we see the best speeds when we use 8 MPI processes per node (with 6 threads each, for a total of 48 threads/node). Based on the architecture, we were expecting the best speeds with 2 MPI processes per node.

Here is what I have tried: using the simfactory files for stampede2-skx (config file, run and submit scripts, and modules loaded), I compiled a version of ET_2017_06 with LazEv (RIT's evolution thorn) and McLachlan, and submitted a series of runs varying both the number of nodes used and how the 48 threads/node are distributed among MPI processes. I use a standard low-resolution grid, with no IO or regridding (parameter file attached). Run speeds are measured from Carpet::physical_time_per_hour at iteration 256. I tried both with and without hwloc/SystemTopology. For both McLachlan and LazEv, I see similar results, with 2 MPI/node giving the worst results (see attached plot for McLachlan) and a slight preference for 8 MPI/node.

So my questions are:
Have any tests been run by other users on Stampede2 SKX?
Should we expect 2 MPI/node to be the optimal choice?
If so, are there any other configurations we can try that could help optimize?

Thanks in advance!
Jim Healy
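[A minimal placement-check sketch, not part of the original exchange: when comparing decompositions such as 2 ranks x 24 threads versus 8 ranks x 6 threads, it can help to first confirm where the ranks and threads actually land on a node, which is the kind of question hwloc/SystemTopology is meant to address. The program below assumes a Linux node (sched_getcpu is Linux-specific) and an MPI compiler wrapper with OpenMP support; the compile line and launcher shown in the comments are site-dependent assumptions, not Stampede2-specific instructions.]

    /* placement.c -- each OpenMP thread of each MPI rank reports the host
     * and CPU core it is running on.  Build with something like
     *   mpicc -fopenmp placement.c -o placement
     * (Intel compilers use -qopenmp), then launch with your site's MPI
     * launcher at the rank/thread counts you want to test. */
    #define _GNU_SOURCE
    #include <mpi.h>
    #include <omp.h>
    #include <sched.h>   /* sched_getcpu(), Linux-specific */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks, hostlen;
        char host[MPI_MAX_PROCESSOR_NAME];

        /* Request threaded MPI since OpenMP threads run alongside MPI. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);
        MPI_Get_processor_name(host, &hostlen);

        #pragma omp parallel
        {
            int tid  = omp_get_thread_num();
            int nthr = omp_get_num_threads();
            int core = sched_getcpu();
            printf("host %s  rank %d/%d  thread %d/%d  core %d\n",
                   host, rank, nranks, tid, nthr, core);
        }

        MPI_Finalize();
        return 0;
    }

Run at, say, 8 ranks per node with OMP_NUM_THREADS=6, the output shows whether each rank's threads are pinned where you expect (e.g. staying within one socket) before drawing conclusions from the timing comparisons.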
