Re: [OMPI users] MPIIO max record size

2013-05-22 Thread Eric Chamberland

On 05/22/2013 12:37 PM, Ralph Castain wrote:

Well, ROMIO was written by Argonne/MPICH (unfair to point the finger solely at 
Rob) and picked up by pretty much everyone. The issue isn't a bug in MPIIO, but 
rather in the MPI functional descriptions.


Ok, sorry about that!

Thanks for the historical and technical information!

Eric



Re: [OMPI users] MPIIO max record size

2013-05-22 Thread Tom Rosmond
I was afraid that was the case.  Too bad, because applications (and the
files they use) are getting much too big for the 32-bit limit.

T. Rosmond


On Wed, 2013-05-22 at 09:37 -0700, Ralph Castain wrote:
> On May 22, 2013, at 9:23 AM, Eric Chamberland wrote:
> 
> > On 05/22/2013 11:33 AM, Tom Rosmond wrote:
> >> Thanks for the confirmation of the MPIIO problem.  Interestingly, we
> >> have the same problem when using MPIIO in Intel MPI.  So something
> >> fundamental seems to be wrong.
> >> 
> > 
> > I think, though I am not sure, that this is because the MPI I/O (ROMIO) code 
> > is the same for all distributions...
> > 
> > It was written by Rob Latham.
> > 
> > Maybe some developers could confirm this?
> 
> Well, ROMIO was written by Argonne/MPICH (unfair to point the finger solely 
> at Rob) and picked up by pretty much everyone. The issue isn't a bug in 
> MPIIO, but rather in the MPI functional descriptions. They stipulate that the 
> input parameter be an int, which defaults to 32 bits on the described system. So 
> there is no way to reference anything beyond 32 bits in size.
> 
> Afraid you'll have to do the multiple reads, or switch to a system that 
> defaults to 64-bit integers.
> 
> > 
> > Eric
> > 
> >> T. Rosmond
> >> 
> >> 
> >> On Wed, 2013-05-22 at 11:21 -0400, Eric Chamberland wrote:
> >>> I have experienced the same problem... and worse, I have discovered a bug
> >>> in MPI I/O...
> >>> 
> >>> look here:
> >>> http://trac.mpich.org/projects/mpich/ticket/1742
> >>> 
> >>> and here:
> >>> 
> >>> http://www.open-mpi.org/community/lists/users/2012/10/20511.php
> >>> 
> >>> Eric
> >>> 
> >>> On 05/21/2013 03:18 PM, Tom Rosmond wrote:
>  Hello:
>  
>  A colleague and I are running an atmospheric ensemble data assimilation
>  system using MPIIO.  We find that if for an individual
>  MPI_FILE_READ_AT_ALL the block of data read exceeds 2**31 elements, the
>  program fails.  Our application is 32-bit Fortran (Intel), so we
>  certainly can see why this might be expected.  Is this the case?  We
>  have a workaround by doing multiple reads from the file while moving the
>  file view, so it isn't a serious problem.
>  
>  Thanks for any advice or suggestions
>  
>  T. Rosmond
>  
>  
>  



Re: [OMPI users] MPIIO max record size

2013-05-22 Thread Ralph Castain

On May 22, 2013, at 9:23 AM, Eric Chamberland wrote:

> On 05/22/2013 11:33 AM, Tom Rosmond wrote:
>> Thanks for the confirmation of the MPIIO problem.  Interestingly, we
>> have the same problem when using MPIIO in Intel MPI.  So something
>> fundamental seems to be wrong.
>> 
> 
> I think, though I am not sure, that this is because the MPI I/O (ROMIO) code 
> is the same for all distributions...
> 
> It was written by Rob Latham.
> 
> Maybe some developers could confirm this?

Well, ROMIO was written by Argonne/MPICH (unfair to point the finger solely at 
Rob) and picked up by pretty much everyone. The issue isn't a bug in MPIIO, but 
rather in the MPI functional descriptions. They stipulate that the input 
parameter be an int, which defaults to 32 bits on the described system. So there 
is no way to reference anything beyond 32 bits in size.

Afraid you'll have to do the multiple reads, or switch to a system that 
defaults to 64-bit integers.
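
For the multiple-reads route, a minimal sketch in C (not from the original 
thread; it assumes the default file view, so offsets are in bytes, and a buffer 
of doubles; names are illustrative and error checking is omitted):

#include <mpi.h>

/* Read "nelems" doubles per rank starting at byte offset "base",
 * keeping every individual count below 2**31. */
static int read_large(MPI_File fh, MPI_Offset base, double *buf,
                      MPI_Offset nelems, MPI_Comm comm)
{
    const MPI_Offset chunk = 1 << 30;   /* elements per call, < 2**31 */
    long long nsteps = (long long)((nelems + chunk - 1) / chunk);
    long long maxsteps, s;
    MPI_Offset done = 0;

    /* A collective must be called the same number of times on every
     * rank, so agree on the worst-case number of chunks up front. */
    MPI_Allreduce(&nsteps, &maxsteps, 1, MPI_LONG_LONG, MPI_MAX, comm);

    for (s = 0; s < maxsteps; ++s) {
        MPI_Offset remaining = nelems - done;
        int count = (int)(remaining > chunk ? chunk
                          : (remaining > 0 ? remaining : 0));
        MPI_Status status;
        MPI_File_read_at_all(fh, base + done * (MPI_Offset)sizeof(double),
                             buf + done, count, MPI_DOUBLE, &status);
        done += count;
    }
    return MPI_SUCCESS;
}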

> 
> Eric
> 
>> T. Rosmond
>> 
>> 
>> On Wed, 2013-05-22 at 11:21 -0400, Eric Chamberland wrote:
>>> I have experienced the same problem... and worse, I have discovered a bug
>>> in MPI I/O...
>>> 
>>> look here:
>>> http://trac.mpich.org/projects/mpich/ticket/1742
>>> 
>>> and here:
>>> 
>>> http://www.open-mpi.org/community/lists/users/2012/10/20511.php
>>> 
>>> Eric
>>> 
>>> On 05/21/2013 03:18 PM, Tom Rosmond wrote:
 Hello:
 
 A colleague and I are running an atmospheric ensemble data assimilation
 system using MPIIO.  We find that if for an individual
 MPI_FILE_READ_AT_ALL the block of data read exceeds 2**31 elements, the
 program fails.  Our application is 32-bit Fortran (Intel), so we
 certainly can see why this might be expected.  Is this the case?  We
 have a workaround by doing multiple reads from the file while moving the
 file view, so it isn't a serious problem.
 
 Thanks for any advice or suggestions
 
 T. Rosmond
 
 
 




Re: [OMPI users] Compatibility between OS, OpenMPI, and OFED

2013-05-22 Thread BRADLEY, PETER C PW
 

We’re seeing some abnormal performance behavior when running an OpenMPI 1.4.4 
application on RHEL 6.4 using Mellanox OFED 1.5.3.  Under certain circumstances, 
system CPU time starts dominating and performance tails off severely.  This 
behavior does not happen when the same job is run over TCP.  Is there a resource 
that shows what’s compatible with what?  For example, does OpenMPI need to be 
rebuilt when the OFED version changes?



Re: [OMPI users] basic questions about compiling OpenMPI

2013-05-22 Thread Tim Prince

On 5/22/2013 11:34 AM, Paul Kapinos wrote:

On 05/22/13 17:08, Blosch, Edwin L wrote:

Apologies for not exploring the FAQ first.


No comments =)



If I want to use Intel or PGI compilers but link against the OpenMPI 
that ships with RedHat Enterprise Linux 6 (compiled with g++ I 
presume), are there any issues to watch out for during linking?


At least, the Fortran-90 bindings ("use mpi") won't work at all 
(they're compiler-dependent).


So, our way is to compile a version of Open MPI with each compiler. I 
think this is recommended.


Note also that the version of Open MPI shipped with Linux is usually a 
bit dusty.



The gfortran build of the Fortran library, as well as the .mod files read by 
USE, won't work with the ifort or PGI compilers.  g++-built libraries ought to 
work with sufficiently recent versions of icpc.
As noted above, it's worthwhile to rebuild Open MPI yourself, even if you use a 
(preferably more up-to-date) version of gcc, which you can use along with one 
of the commercial Fortran compilers for Linux.
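
For example, a per-compiler build usually looks something like this (the prefix 
path is illustrative; CC/CXX/F77/FC are the standard configure variables the 
Open MPI build system honors):

  # one Open MPI installation per compiler suite
  ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-intel
  make all install

The same pattern applies to the PGI suite (CC=pgcc CXX=pgCC FC=pgf90).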


--
Tim Prince



Re: [OMPI users] MPIIO max record size

2013-05-22 Thread Eric Chamberland

On 05/22/2013 11:33 AM, Tom Rosmond wrote:

Thanks for the confirmation of the MPIIO problem.  Interestingly, we
have the same problem when using MPIIO in Intel MPI.  So something
fundamental seems to be wrong.



I think, though I am not sure, that this is because the MPI I/O (ROMIO) code is 
the same for all distributions...


It was written by Rob Latham.

Maybe some developers could confirm this?

Eric


T. Rosmond


On Wed, 2013-05-22 at 11:21 -0400, Eric Chamberland wrote:

I have experienced the same problem... and worse, I have discovered a bug
in MPI I/O...

look here:
http://trac.mpich.org/projects/mpich/ticket/1742

and here:

http://www.open-mpi.org/community/lists/users/2012/10/20511.php

Eric

On 05/21/2013 03:18 PM, Tom Rosmond wrote:

Hello:

A colleague and I are running an atmospheric ensemble data assimilation
system using MPIIO.  We find that if for an individual
MPI_FILE_READ_AT_ALL the block of data read exceeds 2**31 elements, the
program fails.  Our application is 32-bit Fortran (Intel), so we
certainly can see why this might be expected.  Is this the case?  We
have a workaround by doing multiple reads from the file while moving the
file view, so it isn't a serious problem.

Thanks for any advice or suggestions

T. Rosmond










Re: [OMPI users] basic questions about compiling OpenMPI

2013-05-22 Thread Paul Kapinos

On 05/22/13 17:08, Blosch, Edwin L wrote:

Apologies for not exploring the FAQ first.


No comments =)




If I want to use Intel or PGI compilers but link against the OpenMPI that ships 
with RedHat Enterprise Linux 6 (compiled with g++ I presume), are there any 
issues to watch out for during linking?


At least, the Fortran-90 bindings ("use mpi") won't work at all (they're 
compiler-dependent).


So, our way is to compile a version of Open MPI with each compiler. I think this 
is recommended.


Note also that the version of Open MPI shipped with Linux is usually a bit 
dusty.




--
Dipl.-Inform. Paul Kapinos   -   High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23,  D 52074  Aachen (Germany)
Tel: +49 241/80-24915





Re: [OMPI users] basic questions about compiling OpenMPI

2013-05-22 Thread Nathan Hjelm
If you are only using the C API, there will be no issues. There are no 
guarantees with C++ or Fortran.
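
A quick way to take advantage of that (a sketch, not from this thread: OMPI_CC 
is the wrapper-compiler override honored by Open MPI's mpicc, and the file 
names are illustrative):

  # keep the gcc-built Open MPI, but compile the application itself with icc
  OMPI_CC=icc mpicc -o my_app my_app.c
  mpirun -np 4 ./my_app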

-Nathan Hjelm
HPC-3, LANL

On Wed, May 22, 2013 at 03:08:31PM +, Blosch, Edwin L wrote:
> Apologies for not exploring the FAQ first.
> 
> 
> 
> If I want to use Intel or PGI compilers but link against the OpenMPI that 
> ships with RedHat Enterprise Linux 6 (compiled with g++ I presume), are there 
> any issues to watch out for during linking?
> 
> 
> 
> Thanks,
> 
> 
> 
> Ed
> 


Re: [OMPI users] MPIIO max record size

2013-05-22 Thread Eric Chamberland
I have experienced the same problem... and worse, I have discovered a bug 
in MPI I/O...


look here:
http://trac.mpich.org/projects/mpich/ticket/1742

and here:

http://www.open-mpi.org/community/lists/users/2012/10/20511.php
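
(For context: besides multiple reads, the other standard way past a 2**31 
count is to wrap the transfer in a large derived datatype so the count itself 
stays small; a minimal C sketch below, and it is plausibly this large-datatype 
path that the ticket above exercises.)

  /* Hedged sketch; assumes fh, buf, and nblocks are set up elsewhere. */
  MPI_Datatype big;
  MPI_Status status;
  MPI_Type_contiguous(1 << 30, MPI_DOUBLE, &big);  /* 2**30 doubles per block */
  MPI_Type_commit(&big);
  MPI_File_read_at_all(fh, 0, buf, nblocks, big, &status);
  MPI_Type_free(&big);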

Eric

On 05/21/2013 03:18 PM, Tom Rosmond wrote:

Hello:

A colleague and I are running an atmospheric ensemble data assimilation
system using MPIIO.  We find that if for an individual
MPI_FILE_READ_AT_ALL the block of data read exceeds 2**31 elements, the
program fails.  Our application is 32-bit Fortran (Intel), so we
certainly can see why this might be expected.  Is this the case?  We
have a workaround by doing multiple reads from the file while moving the
file view, so it isn't a serious problem.

Thanks for any advice or suggestions

T. Rosmond








[OMPI users] basic questions about compiling OpenMPI

2013-05-22 Thread Blosch, Edwin L
Apologies for not exploring the FAQ first.



If I want to use Intel or PGI compilers but link against the OpenMPI that ships 
with RedHat Enterprise Linux 6 (compiled with g++ I presume), are there any 
issues to watch out for during linking?



Thanks,



Ed