I am using GCC 4.x:
$ pathCC -v
PathScale(TM) Compiler Suite: Version 3.2
Built on: 2008-06-16 16:41:38 -0700
Thread model: posix
GNU gcc version 4.2.0 (PathScale 3.2 driver)
$ pathCC -show-defaults
Optimization level and compilation target:
-O2 -mcpu=opteron -m64 -msse -msse2 -mno-sse3
You should probably take this up with Pathscale's support team.
On Sep 23, 2010, at 3:56 AM, Rafael Arco Arredondo wrote:
> I am using GCC 4.x:
>
> $ pathCC -v
> PathScale(TM) Compiler Suite: Version 3.2
> Built on: 2008-06-16 16:41:38 -0700
> Thread model: posix
> GNU gcc version 4.2.0
Dear all,
I'm studying the interfaces of the new collective routines in the upcoming MPI-3, and
I've read that the new collectives don't have tags.
So all collective operations must follow the ordering rules for collective
calls.
From what I understand, this means that I can't use:
MPI_Ibcast(MPI_COMM_WORLD,
Dear users,
Our cluster has a number of nodes with a high probability of crashing, so
it happens quite often that calculations stop due to one node going down.
Do you know if it is possible to exclude the crashed nodes at run-time
when running with Open MPI? I am asking about principal
Hi All:
I've written an openmpi program that "self schedules" the work.
The master task is in a loop chunking up an input stream and handing off
jobs to worker tasks. At first the master gives the next job to the
next highest rank. After all ranks have their first job, the master
waits via
On Sep 23, 2010, at 6:28 AM, Gabriele Fatigati wrote:
> I'm studying the interfaces of the new collective routines in the upcoming MPI-3, and I've
> read that the new collectives don't have tags.
Correct.
> So all collective operations must follow the ordering rules for collective
> calls.
Also correct.
>
Hmm,
to be sure: if I have one process that does:
MPI_Ibcast(MPI_COMM_WORLD, request_1) // first Bcast
MPI_Ibcast(MPI_COMM_WORLD, request_2) // second Bcast
does it mean that I can't have another process that does the following:
MPI_Ibcast(MPI_COMM_WORLD, request_2) // first Bcast for another process
On Sep 23, 2010, at 10:00 AM, Gabriele Fatigati wrote:
> to be sure: if I have one process that does:
>
> MPI_Ibcast(MPI_COMM_WORLD, request_1) // first Bcast
> MPI_Ibcast(MPI_COMM_WORLD, request_2) // second Bcast
>
> does it mean that I can't have another process that does the following:
>
>
request_1 and request_2 are just local variable names.
The only thing that determines matching order is CC issue order on the
communicator. At each process, some CC is issued first and some CC is
issued second. The first issued CC at each process will try to match the
first issued CC at the
Hi Lewis,
On Thu, Sep 23, 2010 at 9:38 AM, Lewis, Ambrose J.
wrote:
> Hi All:
>
> I’ve written an openmpi program that “self schedules” the work.
>
> The master task is in a loop chunking up an input stream and handing off
> jobs to worker tasks. At first the master
Sorry Richard,
what is "CC issue order on the communicator"? In particular, what does "CC"
mean?
2010/9/23 Richard Treumann
>
> request_1 and request_2 are just local variable names.
>
> The only thing that determines matching order is CC issue order on the
>
Dear Open MPI,
How essential is Open MPI's opal_sys_timer_get_cycles() function?
It apparently needs to access a timestamp register directly. That is
a trivial operation on PPC (mftb) or x86 (rdtsc), but the ARM processor
apparently doesn't have a similar instruction in its instruction set.
Is it
Hi all, I'm new to the list. I don't know if this topic has been treated
before.
My question is:
Is there a way in the OMPI library to report which process is running
on which core in an SMP system? I need to know processor affinity for
optimization purposes.
Regards
Fernando Saez
CC stands for any Collective Communication operation. Every CC occurs on
some communicator.
Every CC is issued (basically the thread the call is on enters the call)
at some point in time. If two threads are issuing CC calls on the same
communicator, the issue order can become ambiguous so
Jeff and Ralph,
Thank you for your reply.
1) I'm not running on machines with OpenFabrics.
2) In my example, ompi-ps prints at most 82 bytes per line. Even so, I
increased the buffer to 300 bytes per line to be sure that it is not the problem:
char mystring[300];
...
fgets(mystring, 300, pFile);
3)
That's a great suggestion...Thanks!
amb
-Original Message-
From: users-boun...@open-mpi.org on behalf of Bowen Zhou
Sent: Thu 9/23/2010 1:18 PM
To: Open MPI Users
Subject: Re: [OMPI users] "self scheduled" work & mpi receive???
> Hi All:
>
> I've written an openmpi program that
Hi Ambrose,
I'm interested in your work. I have an app to convert myself, and I don't
know the MPI structures and syntax well enough to do it...
So if you want to share your app, I'd be interested in taking a look at it!!
Thanks and have a nice day!!
Mikael Lavoie
2010/9/23 Lewis, Ambrose J.
Eloi, I am curious about your problem. Can you tell me what size of job
it is? Does it always fail on the same bcast, or same process?
Eloi Gaudry wrote:
Hi Nysal,
Thanks for your suggestions.
I'm now able to get the checksum computed and redirected to stdout, thanks (I forgot the
"-mca
ompi-ps talks to mpirun to get the info, and then pretty-prints it to
stderr. Best guess is that it is having problems contacting mpirun. Are you
running it on the same node as mpirun (a requirement, unless you pass it the
full contact info)?
Check the ompi-ps man page and also "ompi-ps -h" to
In a word, no. If a node crashes, OMPI will abort the currently-running job
if it had processes on that node. There is no current ability to "ride-thru"
such an event.
That said, there is work being done to support "ride-thru". Most of that is
in the current developer's code trunk, and more is