That’s OK.  Many of us make that mistake, though often as a typo.
One thing that helps is that the correct spelling of Open MPI has a space in 
it, but OpenMP does not.
If you are not aware of what OpenMP is, here is a link: http://openmp.org/wp/

What makes it more confusing is that more and more applications offer the 
option of running in a hybrid mode, such as WRF, with OpenMP threads running 
over MPI ranks with the same executable.  And sometimes that MPI is Open MPI.
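
For a picture of what that hybrid mode looks like, here is a minimal
MPI + OpenMP sketch in C (illustrative only, not WRF code; it uses only
standard MPI and OpenMP calls):

---------------------------------------------
/* hybrid.c -- minimal hybrid MPI + OpenMP sketch (illustrative).
   Build with an MPI compiler wrapper and OpenMP enabled, e.g.:
     mpicc -fopenmp hybrid.c -o hybrid
   Run, e.g.:
     OMP_NUM_THREADS=2 mpirun -np 2 ./hybrid                    */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* MPI: one rank per process   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel                  /* OpenMP: threads within rank */
    printf("rank %d of %d, thread %d of %d\n",
           rank, size, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}
---------------------------------------------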

Cheers,
-Tom

From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Djordje Romanic
Sent: Monday, April 14, 2014 1:28 PM
To: Open MPI Users
Subject: Re: [OMPI users] mpirun runs in serial even I set np to several processors

OK guys... Thanks for all this info. Frankly, I didn't know these differences 
between OpenMP and Open MPI. The commands:
which mpirun
which mpif90
which mpicc
give,
/usr/bin/mpirun
/usr/bin/mpif90
/usr/bin/mpicc
respectively.
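For reference, which implementation those paths belong to can be checked
with, for example:

mpirun --version
mpicc -showme

("mpicc -showme" is the Open MPI wrapper's flag; MPICH's equivalent is
"mpicc -show".  On a Debian/Ubuntu system, "dpkg -S /usr/bin/mpirun" also
names the package that owns the file.)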
A tutorial on how to compile WRF 
(http://www.mmm.ucar.edu/wrf/OnLineTutorial/compilation_tutorial.php) provides 
a test program to test MPI. I ran the program and it gave me the output of a 
successful run, which is:
---------------------------------------------
C function called by Fortran
Values are xx = 2.00 and ii = 1
status = 2
SUCCESS test 2 fortran + c + netcdf + mpi
---------------------------------------------
It uses mpif90 and mpicc for compiling. Below is the output of 'ldd ./wrf.exe':

    linux-vdso.so.1 =>  (0x00007fff584e7000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f4d160ab000)
    libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007f4d15d94000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4d15a97000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f4d15881000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4d154c1000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f4d162e8000)
    libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f4d1528a000)
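
Notably, no MPI library shows up in that list.  If wrf.exe were dynamically
linked against Open MPI, one would typically expect an entry along these
lines (illustrative; the exact name and path vary by version and install):

    libmpi.so.1 => /usr/lib/libmpi.so.1 (0x00007f...)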

On Mon, Apr 14, 2014 at 4:09 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
Djordje

Your WRF configure file seems to use mpif90 and mpicc (line 115 & following).
It also seems to have DISABLED OpenMP (NO TRAILING "I")
(lines 109-111, where the OpenMP stuff is commented out).
So, it looks to me like your intent was to compile with MPI.

Whether it is THIS MPI (Open MPI) or another MPI (say MPICH, or MVAPICH,
or Intel MPI, or Cray, or ...) only your environment can tell.

What do you get from these commands:

which mpirun
which mpif90
which mpicc

I never built WRF here (but other people here use it).
Which input do you provide to the command that generates the configure
script that you sent before?
Maybe the full command line will shed some light on the problem.
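
Independent of WRF, a minimal MPI program can pin this down quickly:
if four copies each report "rank 0 of 1" under "mpirun -np 4", the
launcher and the MPI library the binary was linked against do not
match.  A sketch (standard MPI calls only):

---------------------------------------------
/* mpi_check.c -- minimal rank/size check (illustrative sketch).
   Compile:  mpicc mpi_check.c -o mpi_check
   Run:      mpirun -np 4 ./mpi_check
   A matched build/launcher prints ranks 0..3 of 4; a mismatch
   typically prints "rank 0 of 1" four times, as wrf.exe does here. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
---------------------------------------------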


I hope this helps,
Gus Correa

On 04/14/2014 03:11 PM, Djordje Romanic wrote:
to get help :)



On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic <djord...@gmail.com> wrote:

    Yes, but I was hoping to get. :)


    On Mon, Apr 14, 2014 at 3:02 PM, Jeff Squyres (jsquyres)
    <jsquy...@cisco.com> wrote:

        If you didn't use Open MPI, then this is the wrong mailing list
        for you.  :-)

        (this is the Open MPI users' support mailing list)


        On Apr 14, 2014, at 2:58 PM, Djordje Romanic <djord...@gmail.com> wrote:

         > I didn't use OpenMPI.
         >
         >
         > On Mon, Apr 14, 2014 at 2:37 PM, Jeff Squyres (jsquyres)
         > <jsquy...@cisco.com> wrote:
         > This can also happen when you compile your application with
        one MPI implementation (e.g., Open MPI), but then mistakenly use
        the "mpirun" (or "mpiexec") from a different MPI implementation
        (e.g., MPICH).
         >
         >
         > On Apr 14, 2014, at 2:32 PM, Djordje Romanic <djord...@gmail.com> wrote:
         >
         > > I compiled it with: x86_64 Linux, gfortran compiler with
         gcc (dmpar). dmpar = distributed-memory option.
         > >
         > > Attached is the self-generated configuration file. The
        architecture specification settings start at line 107. I didn't
        use Open MPI (shared memory option).
         > >
         > >
         > > On Mon, Apr 14, 2014 at 1:23 PM, Dave Goodell (dgoodell)
         > > <dgood...@cisco.com> wrote:
         > > On Apr 14, 2014, at 12:15 PM, Djordje Romanic <djord...@gmail.com> wrote:
         > >
         > > > When I start wrf with mpirun -np 4 ./wrf.exe, I get this:
         > > > -------------------------------------------------
         > > >  starting wrf task            0  of            1
         > > >  starting wrf task            0  of            1
         > > >  starting wrf task            0  of            1
         > > >  starting wrf task            0  of            1
         > > > -------------------------------------------------
         > > > This indicates that it is not using 4 processors, but 1.
         > > >
         > > > Any idea what might be the problem?
         > >
         > > It could be that you compiled WRF with a different MPI
        implementation than you are using to run it (e.g., MPICH vs.
        Open MPI).
         > >
         > > -Dave
         > >
         > > <configure.wrf>
         >
         >
         > --
         > Jeff Squyres
         > jsquy...@cisco.com
         > For corporate legal information go to:
         > http://www.cisco.com/web/about/doing_business/legal/cri/


        --
        Jeff Squyres
        jsquy...@cisco.com
        For corporate legal information go to:
        http://www.cisco.com/web/about/doing_business/legal/cri/





