Re: [OMPI users] mpirun error

2013-04-01 Thread Michael Kluskens
The Intel Fortran 2013 compiler comes with support for Intel's MPI runtime, and you are getting that instead of OpenMPI. You need to fix your PATH for all the shells you use. On Apr 1, 2013, at 5:12 AM, Pradeep Jha wrote: > /opt/intel/composer_xe_2013.1.117/mpirt/bin/intel64/mpirun: line 96:
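
The fix the reply points at can be sketched in shell. The install prefix below is an assumption (/usr/local/openmpi); substitute wherever OpenMPI's own mpirun actually lives:

```shell
# Assumed install prefix -- adjust to your actual OpenMPI location.
OPENMPI_BIN=/usr/local/openmpi/bin

# Prepend OpenMPI's bin directory so its mpirun is found before the
# copy shipped in Intel's mpirt directory.
PATH="$OPENMPI_BIN:$PATH"
export PATH

# The first PATH entry is now the OpenMPI bin directory:
echo "${PATH%%:*}"
```

The same line belongs in the startup file of every shell you use (e.g. ~/.bashrc and ~/.profile for bash), since non-interactive shells read different files.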

[OMPI users] mpivars.sh - Intel Fortran 13.1 conflict with OpenMPI 1.6.3

2013-01-24 Thread Michael Kluskens
This is for reference and suggestions, as this took me several hours to track down and the previous discussion on "mpivars.sh" failed to cover this point (nothing in the FAQ): I successfully built and installed OpenMPI 1.6.3 using the following on Debian Linux: ./configure

Re: [OMPI users] tickets 39 & 55

2006-11-06 Thread Michael Kluskens
On Nov 2, 2006, at 7:47 PM, Jeff Squyres wrote: On Nov 2, 2006, at 3:18 PM, Michael Kluskens wrote: So "large" was an attempt to provide *some* of the interfaces -- but [your] experience has shown that this can do more harm than good (i.e., make some legal MPI applications un

Re: [OMPI users] OMPI Collectives

2006-11-01 Thread Michael Kluskens
On Nov 1, 2006, at 10:27 AM, George Bosilca wrote: PS: BTW which version of Open MPI are you using? The one that delivers the best performance for the collective communications (at least on high-performance networks) is the nightly release of the 1.2 branch. As far as I can see the only nightly

[OMPI users] tickets 39 & 55

2006-10-31 Thread Michael Kluskens
OpenMPI tickets 39 & 55 deal with problems with the Fortran 90 large interface with regards to: #39: MPI_IN_PLACE in MPI_REDUCE #55: MPI_GATHER with arrays of different dimensions Attached is a

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-30 Thread Michael Kluskens
on the trunk. On Oct 16, 2006, at 8:29 AM, Åke Sandgren wrote: On Mon, 2006-10-16 at 10:13 +0200, Åke Sandgren wrote: On Fri, 2006-10-06 at 00:04 -0400, Jeff Squyres wrote: On 10/5/06 2:42 PM, "Michael Kluskens" <mk...@ieee.org> wrote: System: BLACS 1.1p3 on Debian Linux 3.1r3

[OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interfaces

2006-10-25 Thread Michael Kluskens
Yet another forgotten issue regarding the f90 large interfaces (note that MPI_IN_PLACE is currently an integer, for a time it was a double complex but that has been fixed). Problem I have now is that my patches which worked with 1.2 don't work

Re: [OMPI users] Starting on remote nodes

2006-10-25 Thread Michael Kluskens
On Oct 25, 2006, at 11:43 AM, Katherine Holcomb wrote: ...We support multiple compilers (specifically PGI and Intel) and due to incompatibilities in different vendors' f90 .mod files, we have separate directories for OpenMPI with each compiler. Therefore we cannot set a global path to the

[OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions

2006-10-24 Thread Michael Kluskens
This is a reminder about an issue I brought up back at the end of May 2006; the solution then was to disable with-mpi-f90-size=large until 1.2. Testing 1.3a1r12274, I see that no progress has been made on this even though I submitted the precise

Re: [OMPI users] BLACS Mac OS X

2006-10-12 Thread Michael Kluskens
On Oct 12, 2006, at 4:14 PM, Warner Yuen wrote: I've just built BLACS using the latest beta: openmpi-1.1.2rc4 as well as openmpi-1.1.1. 1.1.2rc4 should be fine; however, I don't think a new version named 1.1.1 was released and it should fail on some or all platforms. I am getting the

Re: [OMPI users] PBS problem with OpenMP- only one processor used

2006-10-12 Thread Michael Kluskens
On Oct 12, 2006, at 8:23 AM, amane001 wrote: Thanks for your reply. I actually meant OpenMPI. On 12.10.2006 at 09:52, amane001 wrote: > the code below. Even if I set the OMP_NUM_THREADS = 2, the print > setenv OMP_NUM_THREADS 2 These are OpenMP, not OpenMPI, environment variables.

Re: [OMPI users] Trouble with shared libraries

2006-10-12 Thread Michael Kluskens
On Oct 11, 2006, at 10:38 AM, Lisandro Dalcin wrote: On 10/11/06, Jeff Squyres wrote: Open MPI v1.1.1 requires that you set your LD_LIBRARY_PATH to include the directory where its libraries were installed (typically, $prefix/lib). Or, you can use mpirun's --prefix
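
A sketch of the two options mentioned in this thread, assuming the default install prefix /usr/local (substitute your actual --prefix):

```shell
# Option 1: extend LD_LIBRARY_PATH so the runtime linker can find
# Open MPI's shared libraries (prefix /usr/local is an assumption).
LD_LIBRARY_PATH="/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH

# The Open MPI lib directory is now first in the search path:
echo "${LD_LIBRARY_PATH%%:*}"

# Option 2: let mpirun set this up on the remote nodes instead:
#   mpirun --prefix /usr/local -np 4 ./a.out
```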

[OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-10 Thread Michael Kluskens
On Oct 5, 2006, at 4:41 PM, George Bosilca wrote: Once you run the performance tests please let me know the outcome. Ignoring the other issue I just posted here are timings for BLACS 1.1p3 Tester with OpenMPI & MPICH2 on two nodes of a dual-opteron system running Debian Linux 3.1r3,

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-05 Thread Michael Kluskens
On Oct 4, 2006, at 7:51 PM, George Bosilca wrote: This is the correct patch (same as previous minus the debugging statements). On Oct 4, 2006, at 7:42 PM, George Bosilca wrote: The problem was found and fixed. Until the patch get applied to the 1.1 and 1.2 branches please use the attached

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-04 Thread Michael Kluskens
On Oct 4, 2006, at 8:22 AM, Harald Forbert wrote: The TRANSCOMM setting that we are using here and that I think is the correct one is "-DUseMpi2" since OpenMPI implements the corresponding mpi2 calls. You need a recent version of BLACS for this setting to be available (1.1 with patch 3 should

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-03 Thread Michael Kluskens
info for v1.1, and created ticket 464 for the trunk (v1.3) issue. https://svn.open-mpi.org/trac/ompi/ticket/356 https://svn.open-mpi.org/trac/ompi/ticket/464 On 10/3/06 10:53 AM, "Michael Kluskens" <mk...@ieee.org> wrote: Summary: OpenMPI 1.1.1 and 1.3a1r11943 have different

Re: [OMPI users] BLACS vs. OpenMPI 1.1.1 & 1.3

2006-10-03 Thread Michael Kluskens
rors until it crashes on the Complex AMX test (which is after the Integer Sum test). System configuration: Debian 3.1r3 on dual opteron, gcc 3.3.5, Intel ifort 9.1.032. On Oct 3, 2006, at 2:44 AM, Åke Sandgren wrote: On Mon, 2006-10-02 at 18:39 -0400, Michael Kluskens wrote: OpenMPI, B

Re: [OMPI users] BLACS & OpenMPI

2006-10-02 Thread Michael Kluskens
Having trouble getting BLACS to pass tests. OpenMPI, BLACS, and blacstester built just fine. Tester reports errors for integer and real cases #1 and #51 and more for the other types.. is an open ticket related to this. Any word on the

[OMPI users] BLACS & OpenMPI

2006-10-02 Thread Michael Kluskens
Building BLACS 1.1 with patch 3 and OpenMPI 1.1.1 (using gcc and ifort) Configuring the Bmake.inc file, if I set: MPILIB = -lmpi I have no trouble building the install program xsyserrors. However, the more standard approach is to set: MPILIB = $(MPILIBdir)/libmpi.a which generates the
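
The likely cause of the libmpi.a failure: Open MPI builds shared libraries by default, so $(MPILIBdir)/libmpi.a does not exist unless Open MPI was configured with --enable-static. A hedged sketch of the relevant Bmake.inc lines (the prefix is an assumption):

```make
# Assumed install prefix -- adjust to your configure --prefix.
MPIdir    = /usr/local
MPILIBdir = $(MPIdir)/lib
MPIINCdir = $(MPIdir)/include
# Link through the search path rather than naming a static archive;
# Open MPI installs a shared libmpi by default, so libmpi.a is absent.
MPILIB    = -L$(MPILIBdir) -lmpi
```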

Re: [OMPI users] LSF with OpenMPI

2006-08-30 Thread Michael Kluskens
that they don't copy the environment over (others do). None of us have LSF, unfortunately, so we haven't done any work to try to make OMPI work on it. On 8/25/06 10:14 AM, "Michael Kluskens" <mk...@ieee.org> wrote: Is there anyone running OpenMPI on a machine with LSF batch queuei

Re: [OMPI users] Compiling MPI with pgf90

2006-07-31 Thread Michael Kluskens
On Jul 31, 2006, at 1:12 PM, James McManus wrote: I'm trying to compile MPI with pgf90. I use the following configure settings: ./configure --prefix=/usr/local/mpi F90=pgf90 F77=pgf77 Besides the other issue about the wrong env. variable, if you have further trouble I'm using the

Re: [OMPI users] Runtime Error

2006-07-26 Thread Michael Kluskens
ted my machine. Not sure if it's necessary or not. 8. Go back to the v1.1 directory. Type 'make clean', then reconfigure, then recompile and reinstall 9. Things should work now. Thank you Michael, ~Ben ++ Benjamin Landsteiner lands...@stolaf.edu On 2006/06/26, at 3:48 PM, M

[OMPI users] BTL devices

2006-07-14 Thread Michael Kluskens
On Jun 24, 2006, at 1:19 PM, George Bosilca wrote: As your cluster has several network devices that are supported by Open MPI, it is possible that the configure script detected the correct path to their libraries. Therefore, they might be included/compiled by default in Open MPI. The simplest

Re: [OMPI users] auto detect hosts

2006-07-14 Thread Michael Kluskens
On Jun 29, 2006, at 1:31 PM, Jeff Squyres (jsquyres) wrote: I'm running on a cluster of dual-opterons running Debian Linux. Just using "mpirun -np 4 hostname" somehow OpenMPI located the second dual-opteron in the stack of machines but no more than that, regardless of how many processes I

[OMPI users] auto detect hosts

2006-06-19 Thread Michael Kluskens
How does OpenMPI auto-detect available hosts? I'm running on a cluster of dual-opterons running Debian Linux. Just using "mpirun -np 4 hostname" somehow OpenMPI located the second dual-opteron in the stack of machines but no more than that, regardless of how many processes I asked for.

[OMPI users] MPI_Wtime

2006-06-19 Thread Michael Kluskens
Is anyone using MPI_Wtime with any version of OpenMPI under Fortran 90? I got my program to compile with MPI_Wtime commands but the difference between two different times in the process is always zero. When compiling against OpenMPI I have to specify mytime = MPI_Wtime For other MPI's I
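
One common cause of a zero difference is calling MPI_WTIME without its function declaration in scope: it returns DOUBLE PRECISION, and implicit typing silently mangles the result. A minimal self-contained sketch (hypothetical program name, built with mpif90), where "use mpi" brings in the proper declaration:

```fortran
program wtime_test
  use mpi            ! declares MPI_WTIME as a DOUBLE PRECISION function
  implicit none
  integer :: ier, i
  double precision :: t0, t1, s

  call MPI_INIT(ier)
  t0 = MPI_WTIME()
  s = 0.0d0
  do i = 1, 10000000      ! some measurable work between the two samples
     s = s + 1.0d0 / i
  end do
  t1 = MPI_WTIME()
  print *, 'elapsed seconds:', t1 - t0
  call MPI_FINALIZE(ier)
end program wtime_test
```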

Re: [OMPI users] F90 interfaces again

2006-06-12 Thread Michael Kluskens
On Jun 9, 2006, at 12:33 PM, Brian W. Barrett wrote: On Thu, 8 Jun 2006, Michael Kluskens wrote: call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier) 1 Error: Generic subroutine 'mpi_waitall' at (1

[OMPI users] F90 interfaces again

2006-06-08 Thread Michael Kluskens
call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier) 1 Error: Generic subroutine 'mpi_waitall' at (1) is not consistent with a specific subroutine interface Issue, 3rd argument of MPI_WAITALL expects an integer
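
A hedged workaround for the mismatch above: the "large" F90 interface declares the third argument as an integer array of shape (MPI_STATUS_SIZE,*), which the scalar constant MPI_STATUSES_IGNORE fails to match. Passing an explicit statuses array sidesteps the check (sp_request and ier are the names from the report; the statuses array is added here for illustration):

```fortran
! Fragment; assumes "use mpi" and MPI initialization in the enclosing scope.
integer :: sp_request(3), ier
integer :: statuses(MPI_STATUS_SIZE, 3)   ! explicit array matches the interface
call MPI_WAITALL(3, sp_request, statuses, ier)
```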

Re: [OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions

2006-06-02 Thread Michael Kluskens
c/ompi/ticket/55 -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, May 30, 2006 3:40 PM To: Open MPI Users Subject: [OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions Looking at limitations of the

[OMPI users] MPI_REDUCE vs. MPI_IN_PLACE vs. F90 Interfaces

2006-05-30 Thread Michael Kluskens
Found a serious issue in the f90 interfaces for --with-mpi-f90-size=large. Consider: call MPI_REDUCE(MPI_IN_PLACE,sumpfi,sumpfmi,MPI_INTEGER,MPI_SUM, 0,allmpi,ier) Error: Generic subroutine 'mpi_reduce' at (1) is not consistent with a specific subroutine interface sumpfi is an
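
For reference, the failing pattern looks like the fragment below (buffer name and count variable taken from the report; the communicator is swapped for MPI_COMM_WORLD and the array size is assumed): at the root, MPI_IN_PLACE stands in for the send buffer so the reduction happens in place, but the scalar constant cannot match the array dummy argument in the "large" explicit interface.

```fortran
! Fragment; assumes "use mpi" and MPI initialization in the enclosing
! scope, and that this rank is the root of the reduction.
integer :: sumpfi(100), sumpfmi, ier
sumpfmi = 100
call MPI_REDUCE(MPI_IN_PLACE, sumpfi, sumpfmi, MPI_INTEGER, MPI_SUM, &
                0, MPI_COMM_WORLD, ier)
```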

[OMPI users] MPI_GATHER: missing f90 interfaces for mixed dimensions

2006-05-30 Thread Michael Kluskens
Looking at limitations of the following: --with-mpi-f90-size=SIZE specify the types of functions in the Fortran 90 MPI module, where size is one of: trivial (MPI-2 F90-specific functions only), small (trivial +

Re: [OMPI users] Wont run with 1.0.2

2006-05-25 Thread Michael Kluskens
One possibility is that you didn't properly uninstall version 1.0.1 before installing versions 1.0.2 & 1.0.3. There was a change with some of the libraries a while back that caused me a similar problem. An install of later versions of OpenMPI does not remove certain libraries from 1.0.1.

Re: [OMPI users] spawn failed with errno=-7

2006-05-25 Thread Michael Kluskens
I think I moved to OpenMPI 1.1 and 1.2 alphas because of problems with spawn and OpenMPI 1.0.1 & 1.0.2. You may wish to test building 1.1 and seeing if that solves your problem. Michael On May 24, 2006, at 1:48 PM, Jens Klostermann wrote: I did the following run with

Re: [OMPI users] Fortran support not installing

2006-05-24 Thread Michael Kluskens
On May 24, 2006, at 11:24 AM, Terry Reeves wrote: Hello, everyone. I have g95 fortran installed. I'm told it works. I'm doing this for some grad students, I am not myself a programmer or a unix expert but I know a bit more than the basics. This is a Mac OS X dual G5 processor xserve

[OMPI users] MPI_Intercomm_merge broken

2006-05-03 Thread Michael Kluskens
me know what you find. I just checked and the code *looks* right to me, but that doesn't mean that there isn't some deeper implication that I'm missing. -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, May 02,

Re: [OMPI users] fortran flags using Absoft compilers

2006-05-02 Thread Michael Kluskens
On May 1, 2006, at 7:16 PM, Jeffrey Fox wrote: I got openmpi-1.0.2 to compile on a (small) G5 cluster. The C and C++ compilers work fine so far, but the mpif77 and mpif90 scripts send the wrong flags to the f77 and f90 compilers. Side note: I got the Absoft compilers to work using

[OMPI users] openmpi-1.0.2 configure problem

2006-05-01 Thread Michael Kluskens
checking if FORTRAN compiler supports integer(selected_int_kind (2))... yes checking size of FORTRAN integer(selected_int_kind(2))... unknown configure: WARNING: *** Problem running configure test! configure: WARNING: *** See config.log for details. configure: error: *** Cannot continue. Source

Re: [OMPI users] MPI_Intercomm_Merge -- Fortran

2006-05-01 Thread Michael Kluskens
I've noticed that I can't just fix this myself, very bad things happened to the merged communicator, so this is not a trivial fix I gather. Michael On Apr 30, 2006, at 12:16 PM, Michael Kluskens wrote: MPI_Intercomm_Merge( intercomm, high, newintracomm, ier ) None of the books I have

Re: [OMPI users] missing mpi_allgather_f90.f90.sh inopenmpi-1.2a1r9704

2006-04-27 Thread Michael Kluskens
for a few days (making these fixes take a little while). -Original Message- I made another test and the problem does not occur with --with-mpi-f90-size=medium. Michael On Apr 26, 2006, at 11:50 AM, Michael Kluskens wrote: Open MPI 1.2a1r9704 Summary: configure with --with-mpi-f90

Re: [OMPI users] Spawn and Disconnect

2006-04-26 Thread Michael Kluskens
PM, Michael Kluskens wrote: I'm running OpenMPI 1.1 (v9704) and when a spawned process exits the parent does not die (see previous discussions about 1.0.1/1.0.2); however, the next time the parent tries to spawn a process MPI_Comm_spawn does not return. My test output below: parent:

Re: [OMPI users] f90 module files compile a lot faster

2006-04-25 Thread Michael Kluskens
Minor suggestion, change the first sentence to read: - The Fortran 90 MPI bindings can now be built in one of four sizes using --with-mpi-f90-size=SIZE. Also, Open MPI 1.2 changes the --with-mpi-param-check default from always to runtime according to my comparison of the 1.1 README and

[OMPI users] Spawn and Disconnect

2006-04-25 Thread Michael Kluskens
I'm running OpenMPI 1.1 (v9704) and when a spawned process exits the parent does not die (see previous discussions about 1.0.1/1.0.2); however, the next time the parent tries to spawn a process MPI_Comm_spawn does not return. My test output below: parent: 0 of 1 parent: How many

Re: [OMPI users] f90 module files compile a lot faster

2006-04-25 Thread Michael Kluskens
en-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Tuesday, April 25, 2006 9:56 AM To: Open MPI Users Subject: [OMPI users] f90 module files compile a lot faster Strange thing, with the latest g95 and the last OpenMPI 1.1 (a3r9704) [on OS X 10.4.6] there doe

[OMPI users] f90 module files compile a lot faster

2006-04-25 Thread Michael Kluskens
Strange thing: with the latest g95 and the latest OpenMPI 1.1 (a3r9704) [on OS X 10.4.6] there does not seem to be the compilation penalty for using "USE MPI" instead of "include 'mpif.h'" that there used to be. My test programs compile almost instantly. However, I'm still seeing:

Re: [OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-25 Thread Michael Kluskens
in shortly (even the .h.sh script is generated from a marked up version of mpi.h -- don't ask ;-) ). I also corrected type_get_attr and win_get_attr. Thanks! -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Michael Kluskens Sent: Thursday

[OMPI users] f90 interface error?: MPI_Comm_get_attr

2006-04-20 Thread Michael Kluskens
Error in: openmpi-1.1a3r9663/ompi/mpi/f90/mpi-f90-interfaces.h subroutine MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierr) include 'mpif.h' integer, intent(in) :: comm integer, intent(in) :: comm_keyval integer(kind=MPI_ADDRESS_KIND), intent(out) :: attribute_val

Re: [OMPI users] ORTE errors

2006-04-11 Thread Michael Kluskens
[host:00258] [0,0,0] ORTE_ERROR_LOG: Not found in file base/oob_base_xcast.c at line 108 [host:00258] [0,0,0] ORTE_ERROR_LOG: Not found in file base/rmgr_base_stage_gate.c at line 276 child 0 of 1: Receiving 17 from parent Maximum user memory allocated: 0 Michael Michael Kluskens wrote

[OMPI users] ORTE errors

2006-04-10 Thread Michael Kluskens
The ORTE errors again; these are new and different errors. Tested as of OpenMPI 1.1a1r9596. [host:10198] [0,0,0] ORTE_ERROR_LOG: Not found in file base/soh_base_get_proc_soh.c at line 80 [host:10198] [0,0,0] ORTE_ERROR_LOG: Not found in file base/oob_base_xcast.c at line 108 [host:10198]

Re: [OMPI users] job running question

2006-04-10 Thread Michael Kluskens
You need to confirm that /etc/bashrc is actually being read in that environment, bash is a little different on which files get read depending on whether you login interactively or not. Also, I don't think ~/.bashrc is read on a noninteractive login. Michael On Apr 10, 2006, at 1:06 PM,

Re: [OMPI users] Open MPI installed locally

2006-04-03 Thread Michael Kluskens
On Apr 3, 2006, at 3:02 PM, Brian Barrett wrote: On Apr 3, 2006, at 2:50 PM, Rolf Vandevaart wrote: From what I have read from the Open MPI documentation, it seems that the recommendation is to install Open MPI on an NFS server that is accessible to all the nodes in the cell. Are there any

[OMPI users] XMPI ?

2006-03-29 Thread Michael Kluskens
XMPI is a GUI debugger that works with LAM/MPI. Is there anything similar that works with OpenMPI? Michael

Re: [OMPI users] Absoft fortran detected as g77?

2006-03-28 Thread Michael Kluskens
On Mar 28, 2006, at 1:22 PM, Brian Barrett wrote: On Mar 27, 2006, at 8:26 AM, Michael Kluskens wrote: On Mar 23, 2006, at 9:28 PM, Brian Barrett wrote: On Mar 23, 2006, at 5:32 PM, Michael Kluskens wrote: I have Absoft version 8.2a installed on my OS X 10.4.5 system and in order to do

Re: [OMPI users] Best MPI implementation

2006-03-27 Thread Michael Kluskens
On Mar 27, 2006, at 4:11 PM, Jeff Squyres (jsquyres) wrote: For your code, most MPI implementations (Open MPI, LAM/MPI, etc.) support the same API. So if it compiles/links with one, it *should* compile/link with the others (assuming you coded it in an MPI-conformant way). The MPI

Re: [OMPI users] Absoft fortran detected as g77?

2006-03-27 Thread Michael Kluskens
On Mar 23, 2006, at 9:28 PM, Brian Barrett wrote: On Mar 23, 2006, at 5:32 PM, Michael Kluskens wrote: I have Absoft version 8.2a installed on my OS X 10.4.5 system and in order to do some testing I was trying to build OpenMPI 1.1a1r9364 with it and got the following funny result

[OMPI users] Absoft fortran detected as g77?

2006-03-23 Thread Michael Kluskens
I have Absoft version 8.2a installed on my OS X 10.4.5 system and in order to do some testing I was trying to build OpenMPI 1.1a1r9364 with it and got the following funny result: *** Fortran 77 compiler checking whether we are using the GNU Fortran 77 compiler... yes checking whether f95

Re: [OMPI users] mpif90 broken in recent tarballs of 1.1a1

2006-03-21 Thread Michael Kluskens
On Mar 20, 2006, at 7:22 PM, Brian Barrett wrote: On Mar 20, 2006, at 6:10 PM, Michael Kluskens wrote: I have identified what I think is the issue described below. Even though the default prefix is /usr/local, r9336 only works for me if I use ./configure --prefix=/usr/local Thank you

[OMPI users] Sample code demonstrating issues with multiple versions of OpenMPI

2006-03-20 Thread Michael Kluskens
The sample code at the end of this message demonstrates issues with multiple versions of OpenMPI. OpenMPI 1.0.2a10 compiles the code but crashes because of the interface issues previously discussed. This is both using "USE MPI" and "include 'mpif.h'". OpenMPI 1.1a1r9336 generates

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-14 Thread Michael Kluskens
I see responses to noncritical parts of my discussion but not the following; is it a known issue, a fixed issue, or a "we don't want to discuss it" issue? Michael On Mar 7, 2006, at 4:39 PM, Michael Kluskens wrote: The following errors/warnings also exist when running my spawn test on a clean

Re: [OMPI users] Using Multiple Gigabit Ethernet Interface

2006-03-13 Thread Michael Kluskens
On Mar 11, 2006, at 1:00 PM, Jayabrata Chakrabarty wrote: Hi, I have been looking for information on how to use multiple Gigabit Ethernet interfaces for MPI communication. So far what I have found out is I have to use mca_btl_tcp. But what I wish to know is what IP address to assign to each

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-07 Thread Michael Kluskens
On Mar 7, 2006, at 3:23 PM, Michael Kluskens wrote: Per the mpi_comm_spawn issues with the 1.0.x releases I started using 1.1r9212; with my sample code I'm getting messages of [-:13327] mca: base: component_find: unable to open: dlopen(/usr/local/lib/openmpi/mca_pml_teg.so, 9): Symbol

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-07 Thread Michael Kluskens
On Mar 1, 2006, at 12:30 PM, Michael Kluskens wrote: On Mar 1, 2006, at 9:56 AM, George Bosilca wrote: Now that I look into this problem more, you're right, it's a missing interface. Somehow, it didn't get compiled. From "openmpi-1.0.1/ompi/mpi/f90/mpi-f90-interfaces.h" the

Re: [OMPI users] MPI_COMM_SPAWN f90 interface bug?

2006-03-01 Thread Michael Kluskens
On Mar 1, 2006, at 9:56 AM, George Bosilca wrote: Now that I look into this problem more, you're right, it's a missing interface. Somehow, it didn't get compiled. From "openmpi-1.0.1/ompi/mpi/f90/mpi-f90-interfaces.h" the interface says: subroutine MPI_Comm_spawn(command, argv, maxprocs,