Hi,

If, by any bad luck, you use any of the following FORTRAN functions

MPI_FILE_GET_POSITION
MPI_FILE_GET_SIZE 
MPI_FILE_GET_VIEW
MPI_TYPE_EXTENT 

they are all still overflowing 
(http://www.open-mpi.org/community/lists/devel/2010/12/8797.php) because they 
cast the correct result to MPI_Fint, whose default size is 32 bits.
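
If you want to check this on your own installation, something like the sketch 
below will show it (the file name 'big.dat' is just a placeholder for any file 
larger than 2 GiB): even though SZ is declared with MPI_OFFSET_KIND, a binding 
that passes the result through a 32-bit MPI_Fint will report a truncated value.

  program check_get_size
    use mpi
    implicit none
    integer :: fh, ierr
    integer(kind=MPI_OFFSET_KIND) :: sz

    call MPI_INIT(ierr)
    ! 'big.dat' is a placeholder for any file larger than 2 GiB
    call MPI_FILE_OPEN(MPI_COMM_SELF, 'big.dat', MPI_MODE_RDONLY, &
                       MPI_INFO_NULL, fh, ierr)
    ! sz is 64 bits on the Fortran side, but if the binding truncates
    ! the result through a 32-bit MPI_Fint, the value printed here
    ! will be wrong (possibly negative) for files over 2 GiB
    call MPI_FILE_GET_SIZE(fh, sz, ierr)
    print *, 'MPI_FILE_GET_SIZE reports:', sz
    call MPI_FILE_CLOSE(fh, ierr)
    call MPI_FINALIZE(ierr)
  end program check_get_size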

Yves Secretan
yves.secre...@ete.inrs.ca

Before printing, think of the environment 

-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On behalf 
of Ricardo Reis
Sent: 15 May 2012 04:29
To: Open MPI Users
Subject: Re: [OMPI users] MPI-IO puzzlement


  Hi all

  The problem has been found.

  I'm trying to use MPI-IO to write the file with all processes taking part in 
the calculation writing their bit. Here lies the rub.

  Each process has to write a piece of DIM = 35709696 elements.

  Using 64 processes, the offset is my_rank * dim

  and so... the offset, for the last processes, becomes:

  DBG:   60 will WriteMPI_IO. dim     35709696 offset   2142581760
  DBG:   61 will WriteMPI_IO. dim     35709696 offset  -2116675840
  DBG:   62 will WriteMPI_IO. dim     35709696 offset  -2080966144
  DBG:   63 will WriteMPI_IO. dim     35709696 offset  -2045256448


  offset is of the type MPI_OFFSET_KIND, which seems insufficient to hold the 
correct size for the offset.
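
  For what it's worth, the wrap-around in the log above happens before MPI 
ever sees the value: with dim = 35709696 the product my_rank * dim exceeds 
2**31-1 from rank 61 onward, so if both operands are default (32-bit) integers 
the multiplication itself overflows, no matter how the result variable is 
declared. A minimal sketch of the fix (the write call and variable names are 
just guesses at the actual code; whether the offset counts elements or bytes 
depends on the file view):

    integer :: my_rank, dim, fh, ierr
    real(kind=8), allocatable :: buf(:)
    integer(kind=MPI_OFFSET_KIND) :: offset

    dim = 35709696
    ! promote my_rank to the offset kind *before* multiplying; otherwise
    ! the product is evaluated in default 32-bit arithmetic and wraps
    ! for my_rank >= 61
    offset = int(my_rank, kind=MPI_OFFSET_KIND) * dim
    ! hypothetical write: adjust the datatype, and if the default file
    ! view is used the offset must be converted to bytes
    call MPI_FILE_WRITE_AT_ALL(fh, offset, buf, dim, MPI_REAL8, &
                               MPI_STATUS_IGNORE, ierr)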


  So... am I condemned to write my own MPI data type so I can write the 
files? Ideas?

  best regards,


  Ricardo Reis

  'Non Serviam'

  PhD/MSc Mechanical Engineering | Lic. Aerospace Engineering

  Computational Fluid Dynamics, High Performance Computing, Turbulence
  http://www.lasef.ist.utl.pt

  Cultural Instigator @ Rádio Zero
  http://www.radiozero.pt

  http://www.flickr.com/photos/rreis/

  contacts:  gtalk: kyriu...@gmail.com  skype: kyriusan

  Institutional Address:

  Ricardo J.N. dos Reis
  IDMEC, Instituto Superior Técnico, Technical University of Lisbon
  Av. Rovisco Pais
  1049-001 Lisboa
  Portugal

                       - email sent with alpine 2.00 -
