Dear Open MPI experts,
I would like to experiment with the Open MPI tuned collectives,
hoping to improve the performance of some programs we run
in production mode.
However, I could not find any documentation on how to select the
different collective algorithms and other parameters.
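For reference, the tuned component's knobs are exposed as MCA parameters; a sketch of how to inspect and set them follows (the exact parameter names and algorithm numbers vary between Open MPI releases, so verify against your installation's `ompi_info` output — the algorithm number below is illustrative, not a recommendation):

```shell
# List every tunable parameter of the "tuned" collective component:
ompi_info --param coll tuned

# Example: enable dynamic rules and force a specific allreduce algorithm.
mpirun --mca coll_tuned_use_dynamic_rules 1 \
       --mca coll_tuned_allreduce_algorithm 2 \
       -np 16 ./my_mpi_program
```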
In particular,
mpirun --display-allocation --display-map
Run a batch job that just prints out $PBS_NODEFILE. I'll bet that it
isn't what we are expecting, and that the problem comes from it.
In a Torque environment, we read that file to get the list of nodes
and #slots/node that are allocated to your job.
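A minimal Torque batch script for that check might look like this (the resource directives are assumptions for illustration; adjust to your queue):

```shell
#!/bin/sh
#PBS -l nodes=2:ppn=4
#PBS -l walltime=00:05:00

# Print the node file Torque handed us; Open MPI reads this same file
# to build its list of nodes and slots per node.
echo "PBS_NODEFILE = $PBS_NODEFILE"
cat "$PBS_NODEFILE"

# One line per node with its slot count:
sort "$PBS_NODEFILE" | uniq -c
```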
On Jul 22, 2009, at 1:37 PM, Jeff Squyres (jsquyres) wrote:
Yep, that works.
I should clarify -- that *probably* works.
The .mod files are essentially precompiled headers. Assuming that all
the data types and sizes are the same between gfortran and ifort, you
should be ok. Many of OMPI'
Yep, that works.
I'm glad that our txt files and "look at argv[0]" scheme was useful in
the real world! (we designed it with uses almost exactly like this in
mind)
On Jul 20, 2009, at 1:47 PM, Martin Siegert wrote:
Hi,
I want to avoid separate MPI distributions since we compile many
MP
On Jul 20, 2009, at 9:09 AM, Dave Love wrote:
> you should compile openmpi with each of intel and gfortran separately
> and install each of them in a separate location, and use mpi-selector
> to select one.
What, precisely, requires that, at least if you can recompile the MPI
program with app
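The suggested dual-install setup can be sketched as follows (install prefixes and compiler names are assumptions; the mpi-selector flags follow its man page, but verify them on your system):

```shell
# Build and install once per compiler family, each under its own prefix.
tar xzf openmpi-1.3.3.tar.gz && cd openmpi-1.3.3

./configure --prefix=/opt/openmpi-1.3.3-gcc \
            CC=gcc CXX=g++ F77=gfortran FC=gfortran
make -j4 all install

make distclean
./configure --prefix=/opt/openmpi-1.3.3-intel \
            CC=icc CXX=icpc F77=ifort FC=ifort
make -j4 all install

# Register both builds and pick one:
mpi-selector --register openmpi-1.3.3-gcc --source-dir /opt/openmpi-1.3.3-gcc/bin
mpi-selector --register openmpi-1.3.3-intel --source-dir /opt/openmpi-1.3.3-intel/bin
mpi-selector --set openmpi-1.3.3-gcc
```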
On Jul 20, 2009, at 9:03 AM, Dave Love wrote:
> Hmmm...there should be messages on both the user and devel lists
> regarding binary compatibility at the MPI level being promised for
> 1.3.2 and beyond.
This is confusing. As I read the quotes below, recompilation is
necessary, and the announcem
On Jul 22, 2009, at 10:05 AM, vipin kumar wrote:
Actually, the requirement is: how should a C/C++ program running on the "master" node
find out whether a "slave" node is reachable (as we check this
using the "ping" command)? Because the IP address may change at any
time, that's why I am trying to achie
Hi Jeff,
Thanks for your response.
Actually, the requirement is: how should a C/C++ program running on the "master" node
find out whether a "slave" node is reachable (as we check this using the "ping"
command)? Because the IP address may change at any time, that's why I am
trying to achieve this using "host
I'm not sure what you mean. Open MPI uses the hostname of the machine
for general identification purposes. That may be the same as (or
different from) the resolved name that comes back for a given IP interface.
What are you trying to check, exactly?
On Jul 16, 2009, at 1:56 AM, vipin kumar wrote:
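One portable way to approximate that check from within a C program (a sketch, not an Open MPI API; `host_resolves` is a hypothetical helper built on `getaddrinfo`) is to resolve the slave's hostname at the moment you need it, so a changed IP address doesn't matter:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>

/* Hypothetical helper: returns 1 if `host` currently resolves to at
 * least one address, 0 otherwise. Resolution is a necessary (though
 * not sufficient) condition for reachability. */
static int host_resolves(const char *host)
{
    struct addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;    /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    int rc = getaddrinfo(host, NULL, &hints, &res);
    if (res != NULL)
        freeaddrinfo(res);
    return rc == 0;
}

int main(void)
{
    char name[256];
    if (gethostname(name, sizeof(name)) != 0) {
        perror("gethostname");
        return 1;
    }
    printf("local hostname: %s\n", name);
    printf("localhost resolves: %s\n",
           host_resolves("localhost") ? "yes" : "no");
    return 0;
}
```

To actually confirm reachability (not just name resolution) you would additionally attempt a TCP connect to a known open port on the slave, with a timeout.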
On Wed, Jul 22, 2009 at 4:41 PM, Daniël
Mantione wrote:
>
>
> On Wed, 22 Jul 2009, Lee Amy wrote:
>
>> Thanks. I have used your Makefile to recompile. However, I still
>> encounter an odd problem.
>>
>> I have attached the make output and Makefile.
>
> I see nothing wrong with the make output?
>
>
On Wed, 22 Jul 2009, Lee Amy wrote:
> Thanks. I have used your Makefile to recompile. However, I still
> encounter an odd problem.
>
> I have attached the make output and Makefile.
I see nothing wrong with the make output?
Daniël Mantione
Hi Jody
As I'm new to Linux it was much simpler for me to use the default Fedora yum
installer, and the latest version accessible with it is still 1.2.4.
I've installed the latest 1.3.3 version as you advised and that warning
disappeared. I still don't know how and why, but the problem is now solved.
Si
Hi Alexey
I don't know how this error message comes about,
but have you ever considered using a newer version of Open MPI?
1.2.4 is quite ancient, the current version is 1.3.3
http://www.open-mpi.org/software/ompi/v1.3/
Jody
On Wed, Jul 22, 2009 at 9:17 AM, Alexey Sokolov wrote:
> Hi
>
>
Hi
I faced a warning "declaration ‘struct MPI::Grequest_intercept_t’ does
not declare anything" using openmpi 1.2.4 (compiling under Fedora 10
with the mpic++ wrapper over gcc 4.3.2) and don't know how to solve it.
Browsing the Internet I've found advice to just ignore it, but I
don't think it is i
On Wed, Jul 22, 2009 at 2:53 PM, Daniël
Mantione wrote:
>
>
> On Wed, 22 Jul 2009, Lee Amy wrote:
>
>> Dear sir,
>>
>> Thank you very much. I have compiled HPL successfully. But when I
>> start up the xhpl program I encountered the following problem.
>>
>> mpirun noticed that job rank 0 with PID 15416 on node n
On Wed, 22 Jul 2009, Lee Amy wrote:
> Dear sir,
>
> Thank you very much. I have compiled HPL successfully. But when I
> start up the xhpl program I encountered the following problem.
>
> mpirun noticed that job rank 0 with PID 15416 on node node101 exited
> on signal 11 (Segmentation fault).
>
> Could you
On Wed, Jul 22, 2009 at 2:20 PM, Daniël
Mantione wrote:
>
>
> On Wed, 22 Jul 2009, Lee Amy wrote:
>
>> Hi,
>>
>> I'm going to compile HPL by using OpenMPI-1.2.4. Here's my
>> Make.Linux_ATHLON_CBLAS file.
>
> GotoBLAS needs to be called as Fortran BLAS, so you need to switch from
> CBLAS to FBLAS.
On Wed, 22 Jul 2009, Lee Amy wrote:
> Hi,
>
> I'm going to compile HPL by using OpenMPI-1.2.4. Here's my
> Make.Linux_ATHLON_CBLAS file.
GotoBLAS needs to be called as Fortran BLAS, so you need to switch from
CBLAS to FBLAS.
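The relevant lines of a Make.Linux_ATHLON_FBLAS file would look roughly like this (the library path and the underscore convention in F2CDEFS are assumptions; check how your GotoBLAS was actually built):

```make
# Link GotoBLAS through the Fortran 77 BLAS interface instead of CBLAS.
LAdir        = /opt/gotoblas
LAinc        =
LAlib        = $(LAdir)/libgoto.a

# Fortran-to-C name mangling for a gfortran-built BLAS
# (one trailing underscore appended to symbol names):
F2CDEFS      = -DAdd_ -DF77_INTEGER=int -DStringSunStyle

# Note: HPL_OPTS must NOT contain -DHPL_CALL_CBLAS when using FBLAS.
```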
Daniël Mantione
Hi,
I'm going to compile HPL by using OpenMPI-1.2.4. Here's my
Make.Linux_ATHLON_CBLAS file.