I'm developing mpi4py, Python bindings for MPI. I've written many
unittest scripts for my wrappers, which are also intended to exercise
MPI implementations.

Below I list some issues I've found when building and testing my
wrappers against Open MPI 1.1.1. Please let me know your opinions.

- MPI_Group_translate_ranks(group1, n, ranks1, group2, ranks2) fails
  (with MPI_ERR_GROUP) if n != size(group1). According to the
  standard, I understand this routine should work for any value of n,
  as long as ranks1 contains values (even duplicated ones) that are
  valid ranks for group1, i.e. in the range 0 <= rank < size(group1).
  See the sketch below.
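
  A minimal sketch of the kind of call I mean (the particular groups
  and ranks are just for illustration, not my actual unittest):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Group wgroup, sgroup;
        int ranks1[1] = {0};    /* n == 1, possibly less than size(wgroup) */
        int ranks2[1];

        MPI_Init(&argc, &argv);
        MPI_Comm_group(MPI_COMM_WORLD, &wgroup);
        MPI_Comm_group(MPI_COMM_SELF,  &sgroup);

        /* With more than one process, n < size(wgroup) here; I expect
           this to succeed, but Open MPI 1.1.1 returns MPI_ERR_GROUP. */
        MPI_Group_translate_ranks(wgroup, 1, ranks1, sgroup, ranks2);
        printf("world rank 0 -> rank %d in self group\n", ranks2[0]);

        MPI_Group_free(&wgroup);
        MPI_Group_free(&sgroup);
        MPI_Finalize();
        return 0;
    }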

- MPI_Info_get_nthkey(info, 0, key) does not fail when info is
  empty, i.e. when MPI_Info_get_nkeys(info, &nkeys) returns
  nkeys == 0. See the sketch below.
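
  A sketch of the check (setting MPI_ERRORS_RETURN on MPI_COMM_WORLD
  is just my way of looking at the returned error code; the names are
  illustrative, not taken from my actual test suite):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Info info;
        int nkeys, ierr;
        char key[MPI_MAX_INFO_KEY + 1];

        MPI_Init(&argc, &argv);
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        MPI_Info_create(&info);
        MPI_Info_get_nkeys(info, &nkeys);      /* nkeys == 0 */
        ierr = MPI_Info_get_nthkey(info, 0, key);
        printf("nkeys=%d, get_nthkey(info, 0, ...) -> ierr=%d\n",
               nkeys, ierr);                   /* I get MPI_SUCCESS here,
                                                  which looks wrong */
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }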

- Usage of MPI_IN_PLACE is broken in some collectives; these are the
  problems I've found:

  + MPI_Gather:    with sendbuf=MPI_IN_PLACE, sendcount is not ignored.
  + MPI_Scatter:   with recvbuf=MPI_IN_PLACE, recvcount is not ignored.
  + MPI_Allgather: with sendbuf=MPI_IN_PLACE, sendcount is not ignored.

  The standard says that in these cases the corresponding
  [send|recv]count and [send|recv]type arguments are ignored. I have
  not tested the vector variants; perhaps they suffer from the same
  problem. A sketch for the MPI_Gather case is below.
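
  A sketch for the MPI_Gather case (passing 0 and MPI_DATATYPE_NULL at
  the root is just my way of checking that the arguments are really
  ignored; the standard does not mandate those particular values):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, size, sendval, *recvbuf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            recvbuf = (int *) malloc(size * sizeof(int));
            recvbuf[0] = 0;   /* root's contribution, already in place */
            /* sendcount and sendtype should be ignored at the root */
            MPI_Gather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                       recvbuf, 1, MPI_INT, 0, MPI_COMM_WORLD);
            free(recvbuf);
        } else {
            sendval = rank;
            /* recv arguments are significant only at the root */
            MPI_Gather(&sendval, 1, MPI_INT,
                       NULL, 0, MPI_INT, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }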

- Some extended collective communications fail (not by raising
  errors, but by aborting and tracing to stdout) when used with
  intercommunicators; see the sketch below. Sometimes the problems
  appeared when size(local_group) != size(remote_group). However,
  MPI_Barrier and MPI_Bcast worked well. I still have not been able
  to determine the reason for those failures. I found a similar
  problem in MPICH2 when it was configured with error checking
  enabled (they had a bug in some error-checking macros; I reported
  the issue and they confirmed it).
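
  A sketch of the kind of test that fails for me (an intercommunicator
  with groups of different sizes, built here with MPI_Comm_split and
  MPI_Intercomm_create; MPI_Allgather just stands in for the extended
  collectives, it is not the only call affected):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])  /* run with at least 3 processes */
    {
        MPI_Comm intracomm, intercomm;
        int rank, color, remote_leader, rsize;
        int sendval, *recvbuf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* rank 0 alone on one side, everybody else on the other */
        color = (rank == 0) ? 0 : 1;
        MPI_Comm_split(MPI_COMM_WORLD, color, 0, &intracomm);
        remote_leader = (color == 0) ? 1 : 0;  /* ranks in MPI_COMM_WORLD */
        MPI_Intercomm_create(intracomm, 0, MPI_COMM_WORLD,
                             remote_leader, 0, &intercomm);

        /* MPI_Barrier and MPI_Bcast work here, but this one aborts */
        MPI_Comm_remote_size(intercomm, &rsize);
        recvbuf = (int *) malloc(rsize * sizeof(int));
        sendval = rank;
        MPI_Allgather(&sendval, 1, MPI_INT, recvbuf, 1, MPI_INT, intercomm);

        free(recvbuf);
        MPI_Comm_free(&intercomm);
        MPI_Comm_free(&intracomm);
        MPI_Finalize();
        return 0;
    }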


--
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594
