PS - BTW, the --with-psm option that you mentioned you are using
refers to specific hardware (see below).

You need to check which interconnect (network)
your HPC computer uses for MPI communication.
Ask your HPC system administrator or help desk.

If it is
"QLogic InfiniPath PSM" (I don't have this one here), then you probably
want to use the --with-psm option.
If it is standard Gigabit Ethernet or just Ethernet, you don't need to
do anything; OpenMPI will do it for you.
If it is Infiniband, you need --with-openib instead, particularly if the
Infiniband libraries are in a non-standard directory.
If it is Myrinet, there is yet another flag.
And so on.

"./configure --help" is your friend!

Having said that,
I still have stuff running here with OpenMPI 1.3.2 and it
works very well.
So, I would guess you could use your school's 1.3.2 version.
You don't have to go through the trouble of building
your own OpenMPI 1.4.2.

Gus Correa


--with-psm(=DIR) Build PSM (QLogic InfiniPath PSM) support, searching
                          for libraries in DIR
--with-psm-libdir=DIR Search for PSM (QLogic InfiniPath PSM) libraries in
                          DIR


Gus Correa wrote:
Hi Zhigang

I never used OpenFOAM
(we're atmosphere/ocean/climate/Earth Science CFD practitioners here!),
but I would guess it should work with any resource manager,
not necessarily with SGE.

In any case, it doesn't make much sense to configure OpenMPI with SGE,
if your university HPC uses another resource manager (RM).
You need to find out which resource manager the university
computer has,
and build OpenMPI with support for it, assuming that resource manager
is one that is supported by OpenMPI.
If you are lucky, it will be SGE.
If it is Torque, or SLURM, you can still build OpenMPI with RM support.

If you prefer, you can just leave it alone.
You can still run your OpenMPI programs with mpiexec even if you
don't build OpenMPI with resource manager support.

I would guess any resource manager will tell you the compute nodes
that it allocated to you in some way.
In Torque/PBS (which I use here) the environment variable $PBS_NODEFILE
has the list of nodes for each job.
(The list repeats the nodes' names several times, if you ask for many processors/cores on each node.)

So, even if I didn't build OpenMPI with Torque support,
I could launch mpiexec with a syntax more or less like this:

@ NP = `cat $PBS_NODEFILE | wc -l`
mpiexec -n $NP -hostfile $PBS_NODEFILE ./my_program

Not as simple as "mpiexec -n $NP ./my_program", but not
a big deal either.
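For the record, the two lines above are csh syntax ("@ NP = ..." is csh
arithmetic). A bash/sh equivalent, with a made-up /tmp/nodefile standing
in for $PBS_NODEFILE (a fake allocation of 2 nodes x 2 cores each),
would be:

```shell
# Fake node list standing in for $PBS_NODEFILE: the RM repeats each
# node name once per allocated core.
printf 'node01\nnode01\nnode02\nnode02\n' > /tmp/nodefile

NP=$(wc -l < /tmp/nodefile)    # one line per allocated core, so NP=4
echo "$NP"

# With a real job you would then launch:
# mpiexec -n $NP -hostfile $PBS_NODEFILE ./my_program
```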

Other resource managers should have something
equivalent to $PBS_NODEFILE.
You need to find out what your computer has.
In general there is a command named "qsub", or "bsub",
or something else,
and "man qsub" or "man bsub" should turn up some information.
Otherwise, you can ask your HPC support folks about it.
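As a rough sketch of that probing (the command-to-RM mapping below is
common but not guaranteed; SGE also ships a "qsub", for instance), you
could check which submission command is on your $PATH:

```shell
# Illustrative only: guess the resource manager from the submission
# command found on $PATH.
detect_rm() {
    if   command -v qsub   >/dev/null 2>&1; then echo "Torque/PBS or SGE (qsub)"
    elif command -v sbatch >/dev/null 2>&1; then echo "SLURM (sbatch)"
    elif command -v bsub   >/dev/null 2>&1; then echo "LSF (bsub)"
    else echo "unknown"
    fi
}

detect_rm
```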

I hope this helps,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------

Zhigang Wei wrote:
Thank you Gus, your answer is very helpful.
I use an open-source CFD package called OpenFOAM. From the official build
suggestions, I found something like "--with-sge", but I just don't know if
it makes sense in my school's HPC setting.
The basic question is: if simply "./configure --prefix=blahblah" works
(as you have said, modern OpenMPI will AUTOMATICALLY detect the hardware
and software configuration), then why do people try to build it with
"--with-sge", etc.? That makes dummies like me really confused.
Thanks and best regards,
Zhigang Wei
------------
NatHaz Modeling Laboratory
University of Notre Dame
112J FitzPatrick Hall
Notre Dame, IN 46556
------------------------------------------------------------------------
2010-07-08
------------------------------------------------------------------------
*From:* Gus Correa
*Sent:* 2010-07-08  11:17:26
*To:* Open MPI Users
*Cc:*
*Subject:* Re: [OMPI users] configure options
Hi Zhigang
So, did setting the LD_LIBRARY_PATH work?
I don't add many options to the OpenMPI configure,
besides --prefix.
OpenMPI does a very good job searching and checking
for everything that is available and that it needs in the system.
It will build with support for nearly everything it finds
and that works.
Since you are using OpenMPI in your university HPC computer,
you may want to piggy back support from its resource manager/queue
system (e.g. Torque/PBS, --with-tm, or SGE, or SLURM).
This makes mpiexec work in cooperation with the resource manager (RM),
automatically using the nodes that were allocated by the RM
to your job.
That is not essential, but it helps.
The same is true if there is specific hardware
(e.g. Infiniband, --with-openib; NUMA, --with-libnuma; etc.).
You may need to point configure to the directories where these libraries
are, if they are not in standard locations; it depends on your system.
Do ./configure --help for a list of options.
Also, consult the OpenMPI FAQ, which is the best resource to answer
many of your questions:
http://www.open-mpi.org/faq/
One way to check what configuration options OpenMPI is really using,
is to redirect the configure output to a file, and inspect it to see if
everything you want is there.
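As a sketch of that check (the "checking for ..." lines below are a
made-up stand-in for real configure output; with a real build you would
first run "./configure ... 2>&1 | tee configure.out"):

```shell
# Fake two-line configure log standing in for the real redirected output.
cat > /tmp/configure.out <<'EOF'
checking for tm ... yes
checking for sge ... no
EOF

# Count the features configure reported as found.
grep -c 'yes$' /tmp/configure.out
```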
I hope this helps,
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
Zhigang Wei wrote:
> Hi, thanks, the LD_LIBRARY_PATH has been set, and I checked again, and I
> don't think there is a conflict.
>
> May I ask you a question: how do you normally configure your OpenMPI?
>
> I guess you will not use simply "./configure --prefix=blahblah", please
> correct me if I am wrong.
>
> So, what is your procedure to check your system hardware and software
> background, so as to make OpenMPI correctly built?
>
> That's my question.
>
> And thank you, Gus
>
> Zhigang Wei
> ------------
> NatHaz Modeling Laboratory
> University of Notre Dame
> 112J FitzPatrick Hall
> Notre Dame, IN 46556
> ------------------------------------------------------------------------
> 2010-07-08
> ------------------------------------------------------------------------
 > *From:* Gus Correa
 > *Sent:* 2010-07-08  10:07:13
 > *To:* Open MPI Users
 > *Cc:*
 > *Subject:* Re: [OMPI users] configure options
 > Hi Zhigang
 > Are you talking about a run time failure?
 > If you are, I think what is missing is just to set the PATH and the
 > LD_LIBRARY_PATH environment variables to point to the OpenMPI directories.
 > This can be done in your .[t]cshrc / .profile / .bashrc
 > file in your home directory (assuming it is accessible by all computers
 > that you're using to run the program).
 > Hopefully it will override the default OpenMPI 1.3.2 in your HPC.
 > See this FAQ:
 > http://www.open-mpi.org/faq/?category=running#adding-ompi-to-path
 > There are other ways to do it, which you can find with
 > "man $MY_OWN_DIR/share/man/man1/mpiexec".
 > (You could also set MANPATH to get the right man pages.)
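For bash, the dotfile settings described above could look like the sketch
below ($HOME/openmpi-1.4.2 is a placeholder prefix, not from the original
mail):

```shell
# Hedged sketch of ~/.bashrc additions; the prefix is a placeholder
# for wherever you installed your own OpenMPI.
OMPI_PREFIX="$HOME/openmpi-1.4.2"
export PATH="$OMPI_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$OMPI_PREFIX/lib:${LD_LIBRARY_PATH:-}"
export MANPATH="$OMPI_PREFIX/share/man:${MANPATH:-}"
```

Because the new bin directory is prepended, its mpiexec shadows the
system-wide 1.3.2 one.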
 > I hope this helps,
 > Gus Correa
 > ---------------------------------------------------------------------
 > Gustavo Correa
 > Lamont-Doherty Earth Observatory - Columbia University
 > Palisades, NY, 10964-8000 - USA
 > ---------------------------------------------------------------------
 > Zhigang Wei wrote:
 >  > Dear all,
 >  >
 >  > How can I decide the configure options? I am greatly confused.
 >  >
 >  > I am using school's high performance computer.
 >  > But the openmpi there is version 1.3.2, old, so I want to build the new one.
 >  >
 >  > I am new to openmpi; I have built the openmpi and it doesn't work, I
 >  > built and installed it to my own directory.
 >  > I use the following configure options,
 >  >
 >  > ./configure --with-sge --prefix=$MY_OWN_DIR --with-psm
 >  >
 >  > but it won't work and failed with some lines like
 >  > ......lib/openmpi/mca_ess_hnp: file not found (ignored)
 >  > in the output file.
 >  >
 >  > I guess my configure is wrong, could you tell me the meaning of
 >  > --with-psm, --with-sge, do I need to add other options? I guess the
 >  > computing nodes are using infiniband, but how to build with that? If I
 >  > don't have the su right, can I build it? What should I pay attention to if
 >  > I want to build and use my own openmpi?
 >  >
 >  > You see, in a personal multicore computer, building is so easy and
 >  > mpirun runs the program without any problems. But in school's hpc, it fails
 >  > all the time.
 >  >
 >  > Please help. Thank you all.
 >  >
 >  > Zhigang Wei
 >  > ------------
 >  > NatHaz Modeling Laboratory
 >  > University of Notre Dame
 >  > 112J FitzPatrick Hall
 >  > Notre Dame, IN 46556
 >  > ------------------------------------------------------------------------
 >  > 2010-07-07
 >  >
 >  > ------------------------------------------------------------------------
 >  >  >  > _______________________________________________
 >  > users mailing list
 >  > us...@open-mpi.org
 >  > http://www.open-mpi.org/mailman/listinfo.cgi/users

