Sorry, forgot to mention: 1.10.1

                Open MPI: 1.10.1
  Open MPI repo revision: v1.10.0-178-gb80f802
   Open MPI release date: Nov 03, 2015
                Open RTE: 1.10.1
  Open RTE repo revision: v1.10.0-178-gb80f802
   Open RTE release date: Nov 03, 2015
                    OPAL: 1.10.1
      OPAL repo revision: v1.10.0-178-gb80f802
       OPAL release date: Nov 03, 2015
                 MPI API: 3.0.0
            Ident string: 1.10.1


On 12/09/15 11:26, Gilles Gouaillardet wrote:
Paul,

Which Open MPI version are you using?

Thanks for providing a simple reproducer; that will make things much easier from
now on.
(And at first glance, this might not be a very tricky bug.)

Cheers,

Gilles

On Wednesday, December 9, 2015, Paul Kapinos <kapi...@itc.rwth-aachen.de> wrote:

    Dear Open MPI developers,
    has OMPIO (1) reached a 'usable-stable' state?

    As we reported in (2), we had some trouble building Open MPI with ROMIO; this
    was hidden by the OMPIO implementation stepping into the MPI_IO breach. The
    fact that ROMIO wasn't available was only detected after users complained that
    'MPI_IO doesn't work as expected with version XYZ of Open MPI' and after
    further investigation.

    Take a look at the attached example. It delivers different results with ROMIO
    and OMPIO, even with 1 MPI rank on a local hard disk, cf. (3). We've seen more
    examples of divergent behaviour, but this one is quite handy.
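
    (For reference, since the attached main.f90 is not shown inline: below is a
    minimal, hypothetical sketch of what a reproducer along these lines could look
    like, inferred from the output in (3). It opens out.txt with MPI_MODE_WRONLY +
    MPI_MODE_APPEND and prints the file pointer position and file size before and
    after a write. The actual attachment may differ.)

      ! hypothetical reproducer sketch; the real attached main.f90 may differ
      program append_test
        use mpi
        implicit none
        integer :: ierr, fh
        integer :: status(MPI_STATUS_SIZE)
        integer(kind=MPI_OFFSET_KIND) :: fileOffset, fileSize
        character(len=16) :: buf = 'hello-again-1234'

        call MPI_Init(ierr)

        ! out.txt already contains "hello1234" plus newline (10 bytes);
        ! with MPI_MODE_APPEND the file pointer should start at end-of-file,
        ! i.e. at offset 10
        call MPI_File_open(MPI_COMM_WORLD, 'out.txt', &
                           MPI_MODE_WRONLY + MPI_MODE_APPEND, &
                           MPI_INFO_NULL, fh, ierr)

        call MPI_File_get_position(fh, fileOffset, ierr)
        call MPI_File_get_size(fh, fileSize, ierr)
        print *, 'fileOffset, fileSize', fileOffset, fileSize

        ! write 16 bytes; after a correct append the file should be 26 bytes
        call MPI_File_write(fh, buf, len(buf), MPI_CHARACTER, status, ierr)

        call MPI_File_get_position(fh, fileOffset, ierr)
        call MPI_File_get_size(fh, fileSize, ierr)
        print *, 'fileOffset, fileSize', fileOffset, fileSize
        print *, 'ierr', ierr
        print *, 'MPI_MODE_WRONLY,  MPI_MODE_APPEND', MPI_MODE_WRONLY, MPI_MODE_APPEND

        call MPI_File_close(fh, ierr)
        call MPI_Finalize(ierr)
      end program append_test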

    Is that a bug in OMPIO or did we miss something?

    Best
    Paul Kapinos


    1) http://www.open-mpi.org/faq/?category=ompio

    2) http://www.open-mpi.org/community/lists/devel/2015/12/18405.php

    3) (ROMIO is default; on local hard drive at node 'cluster')
    $ ompi_info  | grep  romio
                       MCA io: romio (MCA v2.0.0, API v2.0.0, Component v1.10.1)
    $ ompi_info  | grep  ompio
                       MCA io: ompio (MCA v2.0.0, API v2.0.0, Component v1.10.1)
    $ mpif90 main.f90

    $ echo hello1234 > out.txt; $MPIEXEC -np 1 -H cluster  ./a.out;
      fileOffset, fileSize                    10                    10
      fileOffset, fileSize                    26                    26
      ierr            0
      MPI_MODE_WRONLY,  MPI_MODE_APPEND            4         128

    $ export OMPI_MCA_io=ompio
    $ echo hello1234 > out.txt; $MPIEXEC -np 1 -H cluster  ./a.out;
      fileOffset, fileSize                     0                    10
      fileOffset, fileSize                     0                    16
      ierr            0
      MPI_MODE_WRONLY,  MPI_MODE_APPEND            4         128


    --
    Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
    RWTH Aachen University, IT Center
    Seffenter Weg 23,  D 52074  Aachen (Germany)
    Tel: +49 241/80-24915



_______________________________________________
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/12/28145.php



--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, IT Center
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915
