On 30/08/2018 09:39, Drew Parsons wrote:
> On 2018-08-29 12:15, Alastair McKinstry wrote:
>> On 28/08/2018 22:20, Drew Parsons wrote:
>>> On 2018-08-03 22:46, Dima Kogan wrote:
>>>> 2. Is the MPI implementation significant? Would mpich behave
>>>> potentially differently here from openmpi?
>>>
>>> For what it's worth, 2 separate upstreams (PETSc and FEniCS) both
>>> hold a dim view of openmpi, perceiving it as full of bugs, which has
>>> certainly been the case so far with openmpi3. They recommend mpich.
>>>
>>> We've already discussed transitioning mpi-defaults over from openmpi
>>> to mpich in the past. Now that openmpi3 is more or less settled in
>>> testing, is it time to open that discussion again?
>>
>> I'm in favour of moving the default to mpich. OpenMPI is now at 3.1.2
>> and I plan to ship 3.1.x in Buster.
>>
>> It's also worth testing MPICH 3.3b3. Currently mpich is at 3.3b2, and
>> 3.3b3 has been in experimental for the last few months. I've held off
>> updating mpich until openmpi is stable (nearly there, I hope).
>
> It's a comedy of errors with openmpi3; I see 3.1.2 has triggered new
> RC bugs!
>
> If you want a break from the openmpi angst then go ahead and drop
> mpich 3.3b3 into unstable. It won't make the overall MPI situation
> any worse... :)
>
> Drew
Ok, I've pushed 3.3b3 to unstable.
For me there are two concerns:
(1) The current setup (openmpi as default) shakes out issues in openmpi3
that should be fixed. It would be good to get that done.
(2) Moving to mpich as default is a transition and should be pushed
before the deadline. Say we set 30 September?
Does an MPI/mpich transition overlap with other transitions planned
for Buster, say hwloc or hdf5?
Alastair
--
Alastair McKinstry, <[email protected]>, <[email protected]>,
https://diaspora.sceal.ie/u/amckinstry
Misentropy: doubting that the Universe is becoming more disordered.