On Mon, 2008-09-29 at 17:30 -0500, Zhiliang Hu wrote:
> >As you blanked out some addresses: do the nodes and the headnode have
> >one or two network cards installed? Are all the names like node001 et al.
> >known on each node by the correct address, i.e. 172.16.100.1 = node001?
> >
> >-- Reuti
>
On 30.09.2008 at 00:30, Zhiliang Hu wrote:
At 12:10 AM 9/30/2008 +0200, you wrote:
Can you please try this jobscript instead:
#!/bin/sh
set | grep PBS
/path/to/mpirun /path/to/my_program
All should be handled by Open MPI automatically. With the bash "set"
command you will get a list of all variables that are set (here
filtered for the PBS_* ones).
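(For reference, a fuller version of such a jobscript might look like the
sketch below -- the job name and resource request are placeholders, not
taken from this thread:

#!/bin/sh
#PBS -N my_mpi_job
#PBS -l nodes=6:ppn=2
set | grep PBS
/path/to/mpirun /path/to/my_program

Submitted with "qsub jobscript.sh", the "set | grep PBS" line should then
print the PBS_* variables Torque sets for the job.)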
On 29.09.2008 at 23:15, Doug Reeder wrote:
It sounds like you may not have set up passwordless ssh between all
your nodes.
If you have a tight integration of Open MPI and use the task manager
from Torque, this shouldn't be necessary.
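(Background: "tight integration" means Open MPI was configured with
Torque's TM support, along these lines -- the paths here are
placeholders:

./configure --prefix=/opt/openmpi --with-tm=/usr/local/torque
make all install

With TM support built in, mpirun starts its daemons on the allocated
nodes through the TM API instead of ssh.)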
At 02:15 PM 9/29/2008 -0700, you wrote:
>It sounds like you may not have set up passwordless ssh between all
>your nodes.
>
>Doug Reeder
That's not the case: passwordless ssh is set up and it works fine --
that's how I can run "mpirun -np 6 -machinefile .." fine.
Zhiliang
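(For readers following along, the usual passwordless-ssh setup looks
roughly like the generic sketch below -- not anything specific to this
thread:

ssh-keygen -t rsa          # default location, empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# with a shared $HOME all nodes see this immediately;
# otherwise copy authorized_keys to each node)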
At 07:37 PM 9/29/2008 +0200, Reuti wrote:
>>"-l nodes=6:ppn=2" is all I have to specify the node requests:
>
>this might help: http://www.open-mpi.org/faq/?category=tm
Essentially the examples given on that web page are no different from
what I did. The only thing new, I suppose, is "qsub -I", which is for
interactive jobs.
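(The FAQ's point is that under a tight integration mpirun needs neither
-np nor a machinefile; a minimal sketch with placeholder paths:

$ cat job.sh
#!/bin/sh
/path/to/mpirun /path/to/my_program
$ qsub -l nodes=6:ppn=2 job.sh

mpirun reads the allocation from Torque and starts one process per
allocated slot, here 6 x 2 = 12.)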
Hi,
> A) The execution time in case "1" should be smaller (only sm
> communication, no?) than case "2" and "3", no? Cache problems?
A shot in the dark from working on a Sun T1 (also 8 real cores): from time
to time the OS wants to do something (interrupt handling, waking up
cron, ...). Leaving one or
On Mon 2008-09-29 20:30, Leonardo Fialho wrote:
> 1) If I use one node (8 cores) the "user" % is around 100% per core. The
> execution time is around 430 seconds.
>
> 2) If I use 2 nodes (4 cores in each node) the "user" % is around 95%
> per core and the "sys" % is 5%. The execution time is
Hi All,
I'm running some tests on a multi-core (8 cores per node) machine with
the NAS benchmarks. Something that I consider strange is occurring...
I'm using only one NIC and processor affinity:
./bin/mpirun -n 8 --hostfile ./hostfile --mca mpi_paffinity_alone 1 --mca btl_tcp_if_include eth1
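(Not from the thread, but for context: mpi_paffinity_alone=1 binds each
process to its own core, and btl_tcp_if_include=eth1 restricts the TCP
BTL to that interface. A hostfile for the two-node case might look like:

node01 slots=4
node02 slots=4)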
Gabriele,
If you want to write portable MPI applications then I think you should
stick with what the standard says. In other words, for hybrid
programming where only one thread will make MPI calls, you need
either MPI_THREAD_FUNNELED or MPI_THREAD_SERIALIZED.
Now, that being said, I don't
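(A minimal sketch of requesting such a thread level at startup --
standard MPI calls, not code from this thread:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;
    /* only the main thread will make MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED)
        printf("only thread level %d is provided\n", provided);
    /* ... hybrid MPI+OpenMP work ... */
    MPI_Finalize();
    return 0;
}

If the library was built without thread support, "provided" comes back
as MPI_THREAD_SINGLE.)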
Hello!
I'm trying to build Open MPI on NetBSD 4.99.72.
I get the following message whether I'm building in debug mode
or without it:
[asau.local:27880] [NO-NAME] ORTE_ERROR_LOG: Not found in file
runtime/orte_init_stage1.c at line 182
I am the "system admin" here (so far so good on several servers over several
years, but this PBS thing appears to be daunting ;-)
I suppose to **run ... from *inside a Torque* job** means to run things with a
PBS script. I thought "qsub -l nodes=6:ppn=2 mpirun ..." already brings things
into a PBS
On 29.09.2008 at 18:27, Zhiliang Hu wrote:
How do you run that command line from *inside a Torque* job?
-- I am only a poor biologist, reading through the manuals/
tutorials, but still don't have good clues... (thanks in advance ;-)
What is the content of your jobscript? Did you request more
On Sep 29, 2008, at 12:27 PM, Zhiliang Hu wrote:
How do you run that command line from *inside a Torque* job?
-- I am only a poor biologist, reading through the manuals/tutorials,
but still don't have good clues... (thanks in advance ;-)
Ah, gotcha.
I'm guessing that you're running OMPI
How do you run that command line from *inside a Torque* job?
-- I am only a poor biologist, reading through the manuals/tutorials, but still
don't have good clues... (thanks in advance ;-)
Zhiliang
At 11:48 AM 9/29/2008 -0400, you wrote:
>We need to see that command line from *inside a Torque*
We need to see that command line from *inside a Torque* job. That's
the only place where those PBS_* environment variables will exist --
OMPI's mpirun should be seeing these environment variables (when
inside a Torque job) and then reacting to them by using the Torque
native launcher,
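(For reference, inside a Torque job one should see variables along these
lines -- the names are standard Torque, the values here are invented:

PBS_JOBID=1234.headnode
PBS_NODEFILE=/var/spool/torque/aux/1234.headnode
PBS_O_WORKDIR=/home/user
PBS_ENVIRONMENT=PBS_BATCH)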
On Sep 28, 2008, at 10:07 PM, Zhiliang Hu wrote:
Indeed as you expected, "printenv | grep PBS" produced nothing.
Are you *sure*? I find it very hard to believe that you would get no
output if you ran that command ***in a Torque job***. Torque
would have to be *seriously* misbehaving
Hi George.
So is it dangerous to use a hybrid program (MPI+OpenMP) without
thread support enabled?
2008/9/29 George Bosilca
> Gabriele,
>
> The thread support has to be explicitly requested at build time, or it will
> be disabled. Add --enable-mpi-threads (configure --help
Gabriele,
The thread support has to be explicitly requested at build time, or it
will be disabled. Add --enable-mpi-threads (configure --help will give
you more info) to your configure. If you plan to use threads with Open
MPI I strongly suggest updating to 1.3. This version is not
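(Concretely, such a build might look like this -- the install prefix is
a placeholder:

./configure --prefix=/opt/openmpi --enable-mpi-threads
make all install)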
Another question about MPI_INIT_THREAD.
At the moment, as said, my Open MPI version supports only level 0:
MPI_THREAD_SINGLE. Suppose that I have this code:
#pragma omp barrier
#pragma omp master
MPI_Send(buf,...);
#pragma omp barrier
Given my Open MPI configuration, is it dangerous to use this
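(For context, the pattern above in complete, runnable form -- a generic
sketch, not Gabriele's actual code; note it requests MPI_THREAD_FUNNELED,
which is exactly what an Open MPI built without thread support cannot
grant:

#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided, rank, size, buf = 42;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    #pragma omp parallel
    {
        /* ... threaded computation ... */
        #pragma omp barrier
        #pragma omp master
        {
            /* only the master thread talks to MPI (the FUNNELED
               contract); "omp master" has no implied barrier, hence
               the explicit barriers around it */
            if (rank == 0 && size > 1)
                MPI_Send(&buf, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            else if (rank == 1)
                MPI_Recv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }
        #pragma omp barrier
    }

    MPI_Finalize();
    return 0;
})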
Dear Open MPI developers,
I've noticed that Open MPI versions 1.2.5 and 1.2.6 don't support the
thread initialization function shown below:
int MPI_Init_thread(int *argc, char ***argv, int required, int *provided)
used in hybrid MPI+OpenMP programming. The returned value of "provided" is
0, so the only
Hi Zhiliang
This has nothing to do with how you configured Open MPI. The issue is
that your Torque queue manager isn't setting the expected environment
variables to tell us the allocation. I'm not sure why it wouldn't be
doing so, and I'm afraid I'm not enough of a Torque person to know