[OMPI users] building openmpi-4.0.2 with gfortran

2019-12-13 Thread Tom Rosmond via users
supply to resolve this problem? Sincerely, Tom Rosmond

Re: [OMPI users] more migrating to MPI_F08

2017-03-23 Thread Tom Rosmond
/ worked kinda by accident. Recall that F08 is very, very strongly typed (even more so than C++). Meaning: being picky about 1D-and-a-specific-length is a *feature*! (yeah, it's kind of a PITA, but it really does help prevent bugs) On Mar 23, 2017, at 1:06 PM, Tom Rosmond <r

[OMPI users] more migrating to MPI_F08

2017-03-23 Thread Tom Rosmond
Hello, Attached is a simple MPI program demonstrating a problem I have encountered with 'MPI_Type_create_hindexed' when compiling with the 'mpi_f08' module. There are 2 blocks of code that are only different in how the length and displacement arrays are declared. I get indx.f90(50):
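
For later readers, a minimal sketch of the stricter mpi_f08 declarations this thread converges on (block count, lengths, and the MPI_REAL base type are illustrative, not taken from the attached program): the displacement array must be rank-1 and of kind MPI_ADDRESS_KIND, and the new datatype is a TYPE(MPI_Datatype) handle.

program hindexed_f08
  use mpi_f08
  implicit none
  integer, parameter :: nblk = 4
  integer :: blocklens(nblk), i
  integer(kind=MPI_ADDRESS_KIND) :: displs(nblk)        ! rank-1, ADDRESS_KIND
  type(MPI_Datatype) :: newtype

  call MPI_Init()
  blocklens = 10                                        ! 10 reals per block
  do i = 1, nblk
     displs(i) = int((i-1)*100, MPI_ADDRESS_KIND) * 4   ! byte displacements
  end do
  call MPI_Type_create_hindexed(nblk, blocklens, displs, MPI_REAL, newtype)
  call MPI_Type_commit(newtype)
  call MPI_Type_free(newtype)
  call MPI_Finalize()
end program hindexed_f08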

Re: [OMPI users] migrating to the MPI_F08 module

2017-03-22 Thread Tom Rosmond
Gilles, Yes, I found that definition about 5 minutes after I posted the question. Thanks for the response. Tom On 03/22/2017 03:47 PM, Gilles Gouaillardet wrote: Tom, what if you use type(mpi_datatype) :: mpiint Cheers, Gilles On Thursday, March 23, 2017, Tom Rosmond <r

[OMPI users] migrating to the MPI_F08 module

2017-03-22 Thread Tom Rosmond
Hello; I am converting some fortran 90/95 programs from the 'mpif.h' include file to the 'mpi_f08' model and have encountered a problem. Here is a simple test program that demonstrates it: program testf08 !
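
Under mpi_f08 every handle that was a plain INTEGER with mpif.h becomes a derived type, which is what the reply above points at. A hedged sketch of the declaration changes (variable names are illustrative):

program f08_handles
  use mpi_f08
  implicit none
  type(MPI_Datatype) :: mpiint   ! was: integer :: mpiint under mpif.h / use mpi
  type(MPI_Comm)     :: comm
  integer :: ierr

  call MPI_Init(ierr)
  mpiint = MPI_INTEGER           ! the predefined handles already have these types
  comm   = MPI_COMM_WORLD
  call MPI_Finalize(ierr)
end program f08_handles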

Re: [OMPI users] Fortran and MPI-3 shared memory

2016-10-28 Thread Tom Rosmond
and with Open MPI master /* i replaced shar_mem with fptr_mem */ Cheers, Gilles On 10/26/2016 3:29 AM, Tom Rosmond wrote: All: I am trying to understand the use of the shared memory features of MPI-3 that allow direct sharing of the memory space of on-node processes. Attached are 2 small test

Re: [OMPI users] Fortran and MPI-3 shared memory

2016-10-25 Thread Tom Rosmond
suspicion is that this is a program correctness issue. I can't point to any error, but I've ruled out the obvious alternatives. Jeff On Tue, Oct 25, 2016 at 11:29 AM, Tom Rosmond <rosm...@reachone.com> wrote: All: I am trying to unders

[OMPI users] Fortran and MPI-3 shared memory

2016-10-25 Thread Tom Rosmond
All: I am trying to understand the use of the shared memory features of MPI-3 that allow direct sharing of the memory space of on-node processes. Attached are 2 small test programs, one written in C (testmpi3.c), the other F95 (testmpi3.f90). They are solving the identical 'halo' exchange
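
For reference, a compact sketch of the usual Fortran pattern for these features, written against the mpi_f08 bindings with illustrative sizes (it is not the attached test program): split off an on-node communicator, allocate a shared window, turn the returned C pointer into a Fortran pointer, and query a neighbour's segment for direct load/store access.

program shm_sketch
  use mpi_f08
  use, intrinsic :: iso_c_binding
  implicit none
  integer, parameter :: n = 1000
  integer :: noderank, nodesize, left, disp_unit
  integer(kind=MPI_ADDRESS_KIND) :: winsize, segsize
  type(c_ptr)    :: baseptr
  type(MPI_Comm) :: shmcomm
  type(MPI_Win)  :: win
  real, pointer  :: mine(:), theirs(:)

  call MPI_Init()
  ! communicator containing only the ranks on this node
  call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                           MPI_INFO_NULL, shmcomm)
  call MPI_Comm_rank(shmcomm, noderank)
  call MPI_Comm_size(shmcomm, nodesize)

  ! allocate this rank's piece of the node-shared segment
  winsize = int(n, MPI_ADDRESS_KIND) * 4      ! n default reals, in bytes
  call MPI_Win_allocate_shared(winsize, 4, MPI_INFO_NULL, shmcomm, baseptr, win)
  call c_f_pointer(baseptr, mine, [n])        ! Fortran view of my segment

  ! locate a neighbour's segment for direct access
  left = mod(noderank - 1 + nodesize, nodesize)
  call MPI_Win_shared_query(win, left, segsize, disp_unit, baseptr)
  call c_f_pointer(baseptr, theirs, [n])

  call MPI_Win_fence(0, win)                  ! open an access/exposure epoch
  mine = real(noderank)
  call MPI_Win_fence(0, win)                  ! after this, theirs(:) is readable

  call MPI_Win_free(win)
  call MPI_Finalize()
end program shm_sketch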

Re: [OMPI users] Porting MPI-3 C-program to Fortran

2016-04-22 Thread Tom Rosmond
Thanks for replying, but the difference between what can be done in C vs fortran is still my problem. I apologize for my rudimentary understanding of C, but here is a brief summary: In my originally attached C-program 'testmpi3.c' we have: int **shar_pntr : declare pointer variable (a

[OMPI users] Porting MPI-3 C-program to Fortran

2016-04-18 Thread Tom Rosmond
Hello, I am trying to port a simple halo exchange program from C to fortran. It is designed to demonstrate the shared memory features of MPI-3. The original C program was download from an Intel site, and I have modified it to simplify the port. A tarfile of a directory with each program

Re: [OMPI users] system call failed that shouldn't?

2016-04-14 Thread Tom Rosmond
Gilles, Yes, that solved the problem. Thanks for the help. I assume this fix will be in the next official release, i.e. 1.10.3? Tom Rosmond On 04/13/2016 05:07 PM, Gilles Gouaillardet wrote: Tom, i was able to reproduce the issue with an older v1.10 version, but not with current v1.10

[OMPI users] system call failed that shouldn't?

2016-04-13 Thread Tom Rosmond
Hello, In this thread from the Open-MPI archives: https://www.open-mpi.org/community/lists/devel/2014/03/14416.php a strange problem with a system call is discussed, and claimed to be solved. However, in running a simple test program with some new MPI-3 functions, the problem seems to be

[OMPI users] What about MPI-3 shared memory features?

2016-04-11 Thread Tom Rosmond
Hello, I have been looking into the MPI-3 extensions that added ways to do direct memory copying on multi-core 'nodes' that share memory. Architectures constructed from these nodes are universal now, so improved ways to exploit them are certainly needed. However, it is my understanding that

[OMPI users] Problem with multi-dimensional index arrays

2015-10-07 Thread Tom Rosmond
Hello, The benefits of 'using' the MPI module over 'including' MPIF.H are clear because of the sanity checks it performs, and I recently did some testing with the module that seems to uncover a possible bug or design flaw in OpenMPI's handling of arrays in user-defined data types. Attached
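
If the complaint is the module's strong type checking of the index arrays (the same issue the mpi_f08 threads above run into), one hedged workaround is to keep the user-visible arrays 2-D but hand the type constructor rank-1 copies; names and shapes here are illustrative, not from the attached test.

program indexed_2d
  use mpi
  implicit none
  integer, parameter :: ni = 3, nj = 2
  integer :: blens2d(ni,nj), disps2d(ni,nj)
  integer :: newtype, ierr, i, j

  call MPI_Init(ierr)
  do j = 1, nj
     do i = 1, ni
        blens2d(i,j) = 1
        disps2d(i,j) = (j-1)*ni + (i-1)   ! displacements in units of MPI_REAL
     end do
  end do
  ! the module's interfaces are written for rank-1 arrays, so pass flattened copies
  call MPI_Type_indexed(ni*nj, reshape(blens2d, [ni*nj]), &
                        reshape(disps2d, [ni*nj]), MPI_REAL, newtype, ierr)
  call MPI_Type_commit(newtype, ierr)
  call MPI_Type_free(newtype, ierr)
  call MPI_Finalize(ierr)
end program indexed_2d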

Re: [OMPI users] Segmentation fault with MPI_Type_indexed

2015-03-05 Thread Tom Rosmond
Actually, you are not the first to encounter the problem with 'MPI_Type_indexed' for very large datatypes. I also run with a 1.6 release, and solved the problem by switching to 'MPI_Type_create_hindexed' for the datatype. The critical difference is that the displacements for 'MPI_Type_indexed'
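
For the archive, a hedged sketch of that switch: MPI_Type_indexed takes default-INTEGER displacements in units of the old type, which overflow past 2**31, while MPI_Type_create_hindexed takes byte displacements of kind MPI_ADDRESS_KIND. Counts and strides here are illustrative.

program big_hindexed
  use mpi
  implicit none
  integer, parameter :: nblk = 4
  integer :: blens(nblk), newtype, ierr, i
  integer(kind=MPI_ADDRESS_KIND) :: byte_disps(nblk), stride

  call MPI_Init(ierr)
  blens  = 1000
  stride = 3000000000_MPI_ADDRESS_KIND      ! > 2**31 is fine as a byte offset
  do i = 1, nblk
     byte_disps(i) = (i-1) * stride
  end do
  call MPI_Type_create_hindexed(nblk, blens, byte_disps, MPI_REAL, newtype, ierr)
  call MPI_Type_commit(newtype, ierr)
  call MPI_Type_free(newtype, ierr)
  call MPI_Finalize(ierr)
end program big_hindexed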

Re: [OMPI users] send and receive vectors + variable length

2015-01-08 Thread Tom Rosmond
With array bounds checking your program returns an out-of-bounds error in the mpi_isend call at line 104. Looks like 'send_request' should be indexed with 'sendcount', not 'icount'. T. Rosmond On Thu, 2015-01-08 at 20:28 +0100, Diego Avesani wrote: > the attachment > > Diego > > > > On 8

Re: [OMPI users] MPIIO and derived data types

2014-07-21 Thread Tom Rosmond
0/2014 04:23 PM, Tom Rosmond wrote: > > Hello, > > > > For several years I have successfully used MPIIO in a Fortran global > > atmospheric ensemble data assimilation system. However, I always > > wondered if I was fully exploiting the power of MPIIO, specific

[OMPI users] MPIIO and derived data types

2014-07-20 Thread Tom Rosmond
Hello, For several years I have successfully used MPIIO in a Fortran global atmospheric ensemble data assimilation system. However, I always wondered if I was fully exploiting the power of MPIIO, specifically by using derived data types to better describe memory and file data layouts. All of my
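
A hedged sketch of one way derived datatypes are typically used with MPI-IO, describing each rank's slab of a global 2-D field with MPI_Type_create_subarray and setting it as the file view; the global and local sizes are made up for illustration, and the decomposition is assumed to divide evenly.

program subarray_io
  use mpi
  implicit none
  integer, parameter :: gnx = 100, gny = 80
  integer :: sizes(2), subsizes(2), starts(2)
  integer :: filetype, fh, rank, nprocs, ierr
  integer(kind=MPI_OFFSET_KIND) :: disp
  real, allocatable :: local(:,:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  ! 1-D decomposition of the slowest dimension
  sizes    = [gnx, gny]
  subsizes = [gnx, gny/nprocs]
  starts   = [0, rank*(gny/nprocs)]          ! zero-based, per the standard
  allocate(local(subsizes(1), subsizes(2)))
  local = real(rank)

  call MPI_Type_create_subarray(2, sizes, subsizes, starts, &
                                MPI_ORDER_FORTRAN, MPI_REAL, filetype, ierr)
  call MPI_Type_commit(filetype, ierr)

  call MPI_File_open(MPI_COMM_WORLD, 'field.dat', &
                     MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
  disp = 0
  call MPI_File_set_view(fh, disp, MPI_REAL, filetype, 'native', MPI_INFO_NULL, ierr)
  call MPI_File_write_all(fh, local, size(local), MPI_REAL, MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)

  call MPI_Type_free(filetype, ierr)
  call MPI_Finalize(ierr)
end program subarray_io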

Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Tom Rosmond
What Fortran compiler was your OpenMPI built with? Some Fortran compilers don't understand MPI_IN_PLACE. Do a 'fortran MPI_IN_PLACE' search to see several instances. T. Rosmond On Sat, 2013-09-07 at 10:16 -0400, Hugo Gagnon wrote: > Nope, no luck. My environment is: > > OpenMPI 1.6.5 > gcc 4.8.1 >

Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Tom Rosmond
Just as an experiment, try replacing use mpi with include 'mpif.h' If that fixes the problem, you can confront the OpenMPI experts T. Rosmond On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote: > Thanks for the input but it still doesn't work for me... Here's the > version without

Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Tom Rosmond
I'm afraid I can't answer that. Here's my environment: OpenMPI 1.6.1 IFORT 12.0.3.174 Scientific Linux 6.4 What fortran compiler are you using? T. Rosmond On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote: > Thanks for the input but it still doesn't work for me... Here's the >

Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-06 Thread Tom Rosmond
Hello, Your syntax defining 'a' is not correct. This code works correctly. program test use mpi integer :: ierr, myrank, a(2) = 0 call MPI_Init(ierr) call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr) if (myrank == 0) then a(1) = 1 a(2) = 2 else a(1) = 3 a(2) = 4 endif call
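
The in-place form the rest of this thread is after looks like this; a minimal sketch against the mpi module, with every rank passing MPI_IN_PLACE as the send buffer.

program inplace_allreduce
  use mpi
  implicit none
  integer :: ierr, myrank, a(2)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  a = [myrank + 1, myrank + 2]
  ! root-less in-place reduction: the result overwrites 'a' on every rank
  call MPI_Allreduce(MPI_IN_PLACE, a, 2, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD, ierr)
  if (myrank == 0) print *, 'sums:', a
  call MPI_Finalize(ierr)
end program inplace_allreduce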

Re: [OMPI users] MPIIO max record size

2013-05-22 Thread Tom Rosmond
ca> wrote: > > > On 05/22/2013 11:33 AM, Tom Rosmond wrote: > >> Thanks for the confirmation of the MPIIO problem. Interestingly, we > >> have the same problem when using MPIIO in INTEL MPI. So something > >> fundamental seems to be wrong. > >> >

[OMPI users] MPIIO max record size

2013-05-21 Thread Tom Rosmond
Hello: A colleague and I are running an atmospheric ensemble data assimilation system using MPIIO. We find that if for an individual MPI_FILE_READ_AT_ALL the block of data read exceeds 2**31 elements, the program fails. Our application is 32 bit fortran (Intel), so we certainly can see why this
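
While counts are limited to a default (32-bit) INTEGER, one hedged workaround is to split each oversized collective read into pieces; the chunk size and names below are illustrative, and the sketch assumes 'total' is identical on all ranks so everyone makes the same number of collective calls.

subroutine chunked_read(fh, base, buf, total)
  use mpi
  implicit none
  integer, intent(in) :: fh
  integer(kind=MPI_OFFSET_KIND), intent(in) :: base    ! offset in the current view's etype units
  integer(kind=MPI_OFFSET_KIND), intent(in) :: total   ! elements to read, same on all ranks
  real, intent(out) :: buf(*)
  integer(kind=MPI_OFFSET_KIND), parameter :: chunk = 2_MPI_OFFSET_KIND**30
  integer(kind=MPI_OFFSET_KIND) :: done
  integer :: n, ierr

  done = 0
  do while (done < total)
     n = int(min(chunk, total - done))                 ! safely below 2**31-1
     call MPI_File_read_at_all(fh, base + done, buf(done+1), n, MPI_REAL, &
                               MPI_STATUS_IGNORE, ierr)
     done = done + n
  end do
end subroutine chunked_read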

[OMPI users] openmpi 1.6.1 and libnuma

2012-08-30 Thread Tom Rosmond
I just built Openmpi 1.6.1 with the '--with-libnuma=(dir)' and got a 'WARNING: unrecognized options' message. I am running on a NUMA architecture and have needed this feature with earlier Openmpi releases. Is the support now native in the 1.6 versions? If not, what should I do? T. Rosmond

Re: [OMPI users] Optimal 3-D Cartesian processor mapping

2012-04-24 Thread Tom Rosmond
Will do. My machine is currently quite busy, so it will be a while before I get answers. Stay tuned. T. Rosmond On Tue, 2012-04-24 at 13:36 -0600, Ralph Castain wrote: > Add --display-map to your mpirun cmd line > > On Apr 24, 2012, at 1:33 PM, Tom Rosmond wrote: > > > Jef

Re: [OMPI users] Optimal 3-D Cartesian processor mapping

2012-04-24 Thread Tom Rosmond
. Is there an environmental variable or an MCA option I can add to my 'mpirun' command line that would give that to me? I am running 1.5.4. T. Rosmond On Tue, 2012-04-24 at 15:11 -0400, Jeffrey Squyres wrote: > On Apr 24, 2012, at 3:01 PM, Tom Rosmond wrote: > > >

[OMPI users] Optimal 3-D Cartesian processor mapping

2012-04-24 Thread Tom Rosmond
We have a large ensemble-based atmospheric data assimilation system that does a 3-D cartesian partitioning of the 'domain' using MPI_DIMS_CREATE, MPI_CART_CREATE, etc. Two of the dimensions are spatial, i.e. latitude and longitude; the third is an 'ensemble' dimension, across which subsets of
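
The recipe being described, as a hedged sketch: zero entries of dims are filled in by MPI_Dims_create, so pinning one entry fixes the ensemble dimension (the value 4 is illustrative, and nprocs must be divisible by any pinned entries).

program cart3d
  use mpi
  implicit none
  integer :: dims(3), coords(3), nprocs, rank, cart_comm, ierr
  logical :: periods(3)

  call MPI_Init(ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  dims = 0
  dims(3) = 4                                   ! pin the ensemble dimension
  call MPI_Dims_create(nprocs, 3, dims, ierr)   ! fills the remaining entries

  periods = .false.
  call MPI_Cart_create(MPI_COMM_WORLD, 3, dims, periods, .true., cart_comm, ierr)
  call MPI_Comm_rank(cart_comm, rank, ierr)
  call MPI_Cart_coords(cart_comm, rank, 3, coords, ierr)
  print *, 'rank', rank, 'has lat/lon/ens coordinates', coords

  call MPI_Finalize(ierr)
end program cart3d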

Re: [OMPI users] IO performance

2012-02-06 Thread Tom Rosmond
1AM -0800, Tom Rosmond wrote: > > With all of this, here is my MPI related question. I recently added an > > option to use MPI-IO to do the heavy IO lifting in our applications. I > > would like to know what the relative importance of the dedicated MPI > > network

[OMPI users] IO performance

2012-02-03 Thread Tom Rosmond
Recently the organization I work for bought a modest sized Linux cluster for running large atmospheric data assimilation systems. In my experience a glaring problem with systems of this kind is poor IO performance. Typically they have 2 types of network: 1) A high speed, low latency, e.g.

Re: [OMPI users] Program hangs in mpi_bcast

2011-11-30 Thread Tom Rosmond
ago. See orte/test/mpi/bcast_loop.c > > > > > > On Nov 29, 2011, at 9:35 AM, Jeff Squyres wrote: > > > >> That's quite weird/surprising that you would need to set it down to *5* -- > >> that's really low. > >> > >> Can you share a sim

Re: [OMPI users] Program hangs in mpi_bcast

2011-11-15 Thread Tom Rosmond
- inserting a barrier before or after doesn't seem to make a > lot of difference, but most people use "before". Try different values until > you get something that works for you. > > > On Nov 14, 2011, at 3:10 PM, Tom Rosmond wrote: > > > Hello: > >
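
For later readers, the knob being discussed is Open MPI's coll/sync component, which inserts a barrier every N collective operations; a hedged example command line (the value 5 echoes the number mentioned elsewhere in the thread, and exact parameter names and availability depend on the Open MPI release):

mpirun --mca coll_sync_priority 100 --mca coll_sync_barrier_before 5 -np 256 ./a.out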

[OMPI users] Program hangs in mpi_bcast

2011-11-14 Thread Tom Rosmond
Hello: A colleague and I have been running a large F90 application that does an enormous number of mpi_bcast calls during execution. I deny any responsibility for the design of the code and why it needs these calls, but it is what we have inherited and have to work with. Recently we ported the

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-29 Thread Tom Rosmond
On Mon, 2011-08-29 at 14:22 -0500, Rob Latham wrote: > On Mon, Aug 22, 2011 at 08:38:52AM -0700, Tom Rosmond wrote: > > Yes, we are using collective I/O (mpi_file_write_at_all, > > mpi_file_read_at_all). The swapping of fortran and mpi-io is just > > branches in the code a

Re: [OMPI users] MPIIO and EXT3 file systems

2011-08-22 Thread Tom Rosmond
On Mon, 2011-08-22 at 10:23 -0500, Rob Latham wrote: > On Thu, Aug 18, 2011 at 08:46:46AM -0700, Tom Rosmond wrote: > > We have a large fortran application designed to run doing IO with either > > mpi_io or fortran direct access. On a linux workstation (16 AMD cores) > >

[OMPI users] MPIIO and EXT3 file systems

2011-08-18 Thread Tom Rosmond
We have a large fortran application designed to run doing IO with either mpi_io or fortran direct access. On a linux workstation (16 AMD cores) running openmpi 1.5.3 and Intel fortran 12.0 we are having trouble with random failures with the mpi_io option which do not occur with conventional

Re: [OMPI users] Trouble with MPI-IO

2011-05-24 Thread Tom Rosmond
Rob, Thanks for the clarification. I had seen that point about non-decreasing offsets in the standard and it was just beginning to dawn on me that maybe it was my problem. I will rethink my mapping strategy to comply with the restriction. Thanks again. T. Rosmond On Tue, 2011-05-24 at 10:09

Re: [OMPI users] Trouble with MPI-IO

2011-05-20 Thread Tom Rosmond
ow the allocatable stuff fits in... > (I'm not enough of a Fortran programmer to know) > Anyone else out there who can comment T. Rosmond > > On May 10, 2011, at 7:14 PM, Tom Rosmond wrote: > > > I would appreciate someone with experience with MPI-IO look at the

[OMPI users] Trouble with MPI-IO

2011-05-10 Thread Tom Rosmond
I would appreciate someone with experience with MPI-IO look at the simple fortran program gzipped and attached to this note. It is imbedded in a script so that all that is necessary to run it is do: 'testio' from the command line. The program generates a small 2-D input array, sets up an MPI-IO

Re: [OMPI users] questions about MPI-IO

2011-01-06 Thread Tom Rosmond
the job. T. Rosmond On Thu, 2011-01-06 at 14:52 -0600, Rob Latham wrote: > On Tue, Dec 21, 2010 at 06:38:59PM -0800, Tom Rosmond wrote: > > I use the function MPI_FILE_SET_VIEW with the 'native' > > data representation and correctly write a file with MPI_FILE_WRITE_ALL. > >

[OMPI users] questions about MPI-IO

2010-12-21 Thread Tom Rosmond
I have been experimenting with some simple fortran test programs to write files with some of the MPI-IO functions, and have come across a troubling issue. I use the function MPI_FILE_SET_VIEW with the 'native' data representation and correctly write a file with MPI_FILE_WRITE_ALL. However, if I
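
A minimal sketch of the write path described here, against the mpi module, with the file name, sizes, and per-rank byte displacement all illustrative:

program setview_native
  use mpi
  implicit none
  integer, parameter :: n = 100
  integer :: fh, rank, ierr
  integer(kind=MPI_OFFSET_KIND) :: disp
  real :: buf(n)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  buf = real(rank)

  call MPI_File_open(MPI_COMM_WORLD, 'out.dat', &
                     MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh, ierr)
  disp = int(rank, MPI_OFFSET_KIND) * n * 4     ! byte offset of this rank's slab
  call MPI_File_set_view(fh, disp, MPI_REAL, MPI_REAL, 'native', MPI_INFO_NULL, ierr)
  call MPI_File_write_all(fh, buf, n, MPI_REAL, MPI_STATUS_IGNORE, ierr)
  call MPI_File_close(fh, ierr)
  call MPI_Finalize(ierr)
end program setview_native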

[OMPI users] MPI-IO problem

2010-12-15 Thread Tom Rosmond
I want to implement an MPI-IO solution for some of the IO in a large atmospheric data assimilation system. Years ago I got some small demonstration Fortran programs ( I think from Bill Gropp) that seem to be good candidate prototypes for what I need. Two of them are attached as part of simple

Re: [OMPI users] send and receive buffer the same on root

2010-09-16 Thread Tom Rosmond
compiler specific I think. I've done this with OpenMPI no > > problem, however on one another cluster with ifort I've gotten error > > messages about not using MPI_IN_PLACE. So I think if it compiles, > > it should work fine. > > > > On Thu, Sep 16, 2010 at 10:0

[OMPI users] send and receive buffer the same on root

2010-09-16 Thread Tom Rosmond
I am working with a Fortran 90 code with many MPI calls like this: call mpi_gatherv(x, nsize(rank+1), mpi_real, x, nsize, nstep, mpi_real, root, mpi_comm_world, mstat) 'x' is allocated on root to be large enough to hold the results of the gather, other arrays and parameters are defined correctly,
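
The standard's answer to reusing the receive buffer as the send buffer is MPI_IN_PLACE on the root, which the reply above also mentions; a hedged sketch with illustrative counts, keeping the variable names of the original call:

program gatherv_inplace
  use mpi
  implicit none
  integer, parameter :: root = 0, m = 4
  integer :: rank, nprocs, mstat, i
  integer, allocatable :: nsize(:), nstep(:)
  real, allocatable :: x(:)

  call MPI_Init(mstat)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, mstat)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, mstat)

  allocate(nsize(nprocs), nstep(nprocs), x(m*nprocs))
  nsize = m                                  ! m elements from every rank
  nstep = [( (i-1)*m, i = 1, nprocs )]       ! root-side displacements
  x(1:m) = real(rank)                        ! each rank's contribution

  if (rank == root) then
     ! the root's contribution must already sit at its displacement in x;
     ! sendbuf/sendcount/sendtype are then ignored
     call MPI_Gatherv(MPI_IN_PLACE, 0, MPI_REAL, x, nsize, nstep, MPI_REAL, &
                      root, MPI_COMM_WORLD, mstat)
  else
     call MPI_Gatherv(x, nsize(rank+1), MPI_REAL, x, nsize, nstep, MPI_REAL, &
                      root, MPI_COMM_WORLD, mstat)
  end if
  call MPI_Finalize(mstat)
end program gatherv_inplace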

Re: [OMPI users] An error occured in MPI_Bcast; MPI_ERR_TYPE: invalid datatype

2010-05-21 Thread Tom Rosmond
Your fortran call to 'mpi_bcast' needs an error-status (ierror) argument at the end of the argument list. Also, I don't think 'MPI_INT' is correct for fortran, it should be 'MPI_INTEGER'. With these changes the program works OK. T. Rosmond On Fri, 2010-05-21 at 11:40 +0200, Pankatz, Klaus wrote: > Hi folks,
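
For the archive, a hedged sketch of the corrected Fortran call (buffer name and count are illustrative):

program bcast_fix
  use mpi
  implicit none
  integer :: ibuf(4), rank, ierr

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  if (rank == 0) ibuf = [1, 2, 3, 4]
  ! Fortran form: MPI_INTEGER (not the C name MPI_INT) and a trailing ierror argument
  call MPI_Bcast(ibuf, 4, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
  call MPI_Finalize(ierr)
end program bcast_fix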

Re: [OMPI users] running multiple executables under Torque/PBS PRO

2009-11-10 Thread Tom Rosmond
orte_hosts" (assuming you installed the man pages) > and see what it says. > > Ralph > > On Nov 10, 2009, at 2:46 PM, Tom Rosmond wrote: > > > I want to run a number of MPI executables simultaneously in a PBS job. > > For example on my system I do 'cat $PBS_NODEFI

[OMPI users] running multiple executables under Torque/PBS PRO

2009-11-10 Thread Tom Rosmond
I want to run a number of MPI executables simultaneously in a PBS job. For example on my system I do 'cat $PBS_NODEFILE' and get a list like this: n04 n04 n04 n04 n06 n06 n06 n06 n07 n07 n07 n07 n09 n09 n09 n09 i.e, 16 processors on 4 nodes. from which I can parse into file(s) as desired. If I
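
One common approach, as a hedged shell sketch: carve $PBS_NODEFILE into per-job hostfiles and launch each mpirun in the background. It assumes mpirun will accept hostfiles that are subsets of the Torque allocation (the orte_hosts man page mentioned in the reply above covers the details); the names and the 8/8 split are illustrative.

#!/bin/sh
# split the 16-slot allocation into two 8-slot hostfiles
head -n 8 $PBS_NODEFILE > hosts_a
tail -n 8 $PBS_NODEFILE > hosts_b
mpirun -np 8 --hostfile hosts_a ./model_a &
mpirun -np 8 --hostfile hosts_b ./model_b &
wait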

Re: [OMPI users] Programming Help needed

2009-11-06 Thread Tom Rosmond
AMJAD On your first question, the answer is probably, if everything else is done correctly. The first test is to not try to do the overlapping communication and computation, but do them sequentially and make sure the answers are correct. Have you done this test? Debugging your original approach

[OMPI users] all2all algorithms

2009-04-12 Thread Tom Rosmond
I am curious about the algorithm(s) used in the OpenMPI implementations of the all2all and all2allv. As many of you know, there are alternate algorithms for all2all type operations, such as that of Plimpton, et al (2006), that basically exchange latency costs for bandwidth costs, which pays big
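
For readers digging into this, Open MPI's 'tuned' collective component exposes its algorithm choices as MCA parameters; a hedged example of inspecting and forcing them (parameter names and algorithm numbering can vary by release):

ompi_info --param coll tuned | grep alltoall
mpirun --mca coll_tuned_use_dynamic_rules 1 --mca coll_tuned_alltoall_algorithm 2 -np 64 ./a.out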

Re: [OMPI users] Choices on how to implement a python module in MPI.

2007-02-04 Thread Tom Rosmond
Have you looked at the self-scheduling algorithm described in "USING MPI" by Gropp, Lusk, and Skjellum. I have seen efficient implementations of it for large satellite data assimilation problems in numerical weather prediction, where load distribution across processors cannot be predicted in
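
For context, the self-scheduling (manager/worker) pattern from that book, as a hedged Fortran sketch with a stand-in task payload; it needs at least 2 ranks.

program self_schedule
  use mpi
  implicit none
  integer, parameter :: ntasks = 100, tag_work = 1, tag_stop = 2
  integer :: rank, nprocs, ierr, next, who, i, task, dummy
  integer :: status(MPI_STATUS_SIZE)
  real :: result

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  dummy = 0

  if (rank == 0) then                              ! manager
     next = 0
     do who = 1, nprocs-1                          ! seed every worker once
        if (next < ntasks) then
           next = next + 1
           call MPI_Send(next, 1, MPI_INTEGER, who, tag_work, MPI_COMM_WORLD, ierr)
        else
           call MPI_Send(dummy, 1, MPI_INTEGER, who, tag_stop, MPI_COMM_WORLD, ierr)
        end if
     end do
     do i = 1, ntasks                              ! one result per task
        call MPI_Recv(result, 1, MPI_REAL, MPI_ANY_SOURCE, MPI_ANY_TAG, &
                      MPI_COMM_WORLD, status, ierr)
        who = status(MPI_SOURCE)
        if (next < ntasks) then                    ! keep the idle worker busy
           next = next + 1
           call MPI_Send(next, 1, MPI_INTEGER, who, tag_work, MPI_COMM_WORLD, ierr)
        else
           call MPI_Send(dummy, 1, MPI_INTEGER, who, tag_stop, MPI_COMM_WORLD, ierr)
        end if
     end do
  else                                             ! worker
     do
        call MPI_Recv(task, 1, MPI_INTEGER, 0, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
        if (status(MPI_TAG) == tag_stop) exit
        result = real(task)**2                     ! stand-in for the real work
        call MPI_Send(result, 1, MPI_REAL, 0, tag_work, MPI_COMM_WORLD, ierr)
     end do
  end if
  call MPI_Finalize(ierr)
end program self_schedule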

[OMPI users] Testing 1-sided MPI again

2006-08-15 Thread Tom Rosmond
I am continuing to test the MPI-2 features of 1.1, and have run into some puzzling behavior. I wrote a simple F90 program to test 'mpi_put' and 'mpi_get' on a coordinate transformation problem on a two dual-core processor Opteron workstation running the PGI 6.1 compiler. The program runs
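
For comparison, a hedged sketch of the basic active-target mpi_get pattern (window size and the neighbour choice are illustrative, not the coordinate-transform test):

program onesided
  use mpi
  implicit none
  integer, parameter :: n = 10
  integer :: rank, nprocs, win, right, ierr
  integer(kind=MPI_ADDRESS_KIND) :: winsize, target_disp
  real :: local(n), remote(n)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  local = real(rank)
  winsize = n * 4                            ! n default reals, in bytes
  call MPI_Win_create(local, winsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierr)

  right = mod(rank + 1, nprocs)
  target_disp = 0
  call MPI_Win_fence(0, win, ierr)           ! open the access epoch
  call MPI_Get(remote, n, MPI_REAL, right, target_disp, n, MPI_REAL, win, ierr)
  call MPI_Win_fence(0, win, ierr)           ! complete it; 'remote' is now valid

  call MPI_Win_free(win, ierr)
  call MPI_Finalize(ierr)
end program onesided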

Re: [OMPI users] Wont run with 1.0.2

2006-05-25 Thread Tom Rosmond
sudo make uninstall cd ../openmpi1.0.2 sudo make install I have had no trouble in the past with PGF90 version 6.1-3 and OpenMPI 1.1a on a dual Opteron 1.4 GHz machine running Debian Linux. Michael On May 24, 2006, at 7:43 PM, Tom Rosmond wrote: After using OPENMPI Ver 1.0.1 for several mo

[OMPI users] Wont run with 1.0.2

2006-05-24 Thread Tom Rosmond
After using OPENMPI Ver 1.0.1 for several months without trouble, last week I decided to upgrade to Ver 1.0.2. My primary motivation was curiosity, to see if there was any performance benefit. To my surprise, several of my F90 applications refused to run with the newer version. I also tried

Re: [OMPI users] Myrinet on linux cluster

2006-03-10 Thread Tom Rosmond
the installation to the compute nodes, but I hope that will be routine. Thanks for the help Brian Barrett wrote: On Mar 10, 2006, at 8:35 AM, Brian Barrett wrote: On Mar 9, 2006, at 11:37 PM, Tom Rosmond wrote: Attached are output files from a build with the adjustments you suggested

Re: [OMPI users] Myrinet on linux cluster

2006-03-09 Thread Tom Rosmond
Troy Telford wrote: The configure seemed to go OK, but the make failed. As you see at the end of the make output, it doesn't like the format of libgm.so. It looks to me that it is using a path (/usr/lib/.) to 32 bit libraries, rather than 64 bit (/usr/lib64/). Is this correct?

[OMPI users] Myrinet on linux cluster

2006-03-09 Thread Tom Rosmond
a path (/usr/lib/.) to 32 bit libraries, rather than 64 bit (/usr/lib64/). Is this correct? What's the solution? Tom Rosmond Attachments: config.log.bz2, config_out.bz2, make_out.bz2 (BZip2 compressed data)

Re: [O-MPI users] LAM vs OPENMPI performance

2006-01-04 Thread Tom Rosmond
4, 2006, at 4:24 PM, Tom Rosmond wrote: I have been using LAM-MPI for many years on PC/Linux systems and have been quite pleased with its performance. However, at the urging of the LAM-MPI website, I have decided to switch to OPENMPI. For much of my preliminary testing I work on a single