Or OMPI_CC=icc-xx.y mpicc ...
Jed
On Aug 12, 2010 5:18 PM, "Ralph Castain" wrote:
On Aug 12, 2010, at 7:04 PM, Michael E. Thomadakis wrote:
> On 08/12/10 18:59, Tim Prince wrote:
>>...
The "easy" way to accomplish this would be to:
(a) build OMPI with whatever compiler you decide to use as a
David Zhang wrote:
When my MPI code fails (seg fault), it usually causes the rest of the MPI
processes to abort as well. Perhaps rather than calling abort(), you
could do a divide-by-zero operation to halt the program?
David Zhang
University of California, San Diego
>
On Thu, Aug 12, 201
Sounds very strange - what OMPI version, on what type of machine, and how was
it configured?
On Aug 12, 2010, at 7:49 PM, David Ronis wrote:
> I've got an MPI program that is supposed to generate a core file if
> problems arise on any of the nodes. I tried to do this by adding a
> call to a
When my MPI code fails (seg fault), it usually causes the rest of the MPI
processes to abort as well. Perhaps rather than calling abort(), you
could do a divide-by-zero operation to halt the program?
On Thu, Aug 12, 2010 at 6:49 PM, David Ronis wrote:
> I've got an MPI program that is suppo
Sorry for the late replies but with work, time zones etc…
This post has been going on for a while, and in an attempt to bring it to a close
I'm going to try to collapse this down to some core issues and answer all the
questions in one place.
Richard: yes your last statement is correct, I am just usin
I've got an MPI program that is supposed to generate a core file if
problems arise on any of the nodes. I tried to do this by adding a
call to abort() to my exit routines but this doesn't work; I get no core
file, and worse, mpirun doesn't detect that one of my nodes has
aborted(?) and doesn't
On 8/12/2010 6:04 PM, Michael E. Thomadakis wrote:
On 08/12/10 18:59, Tim Prince wrote:
On 8/12/2010 3:27 PM, Ralph Castain wrote:
Ick - talk about confusing! I suppose there must be -some- rational
reason why someone would want to do this, but I can't imagine what
it would be
I'm no exp
On Aug 12, 2010, at 7:04 PM, Michael E. Thomadakis wrote:
> On 08/12/10 18:59, Tim Prince wrote:
>>
>> On 8/12/2010 3:27 PM, Ralph Castain wrote:
>>>
>>> Ick - talk about confusing! I suppose there must be -some- rational reason
>>> why someone would want to do this, but I can't imagine what i
On Thu, 2010-08-12 at 20:04 -0500, Michael E. Thomadakis wrote:
> The basic motive in this hypothetical situation is to build the MPI
> application ONCE and then swap run-time libs as newer compilers come out
Not building your application with the compiler you want to use sounds
like a very
On 08/12/10 18:59, Tim Prince wrote:
On 8/12/2010 3:27 PM, Ralph Castain wrote:
Ick - talk about confusing! I suppose there must be -some- rational
reason why someone would want to do this, but I can't imagine what it
would be
I'm no expert on compiler vs lib confusion, but some of my ow
On 08/12/10 17:27, Ralph Castain wrote:
Ick - talk about confusing! I suppose there must be -some- rational
reason why someone would want to do this, but I can't imagine what it
would be
I'm no expert on compiler vs lib confusion, but some of my own
experience would say that this is a ba
On 8/12/2010 3:27 PM, Ralph Castain wrote:
Ick - talk about confusing! I suppose there must be -some- rational
reason why someone would want to do this, but I can't imagine what it
would be
I'm no expert on compiler vs lib confusion, but some of my own
experience would say that this is a
Ick - talk about confusing! I suppose there must be -some- rational reason why
someone would want to do this, but I can't imagine what it would be
I'm no expert on compiler vs lib confusion, but some of my own experience would
say that this is a bad idea regardless of whether or not OMPI is
Hello OpenMPI,
we have deployed OpenMPI 1.4.1 and 1.4.2 on our Intel Nehalem cluster
using Intel compilers V 11.1.059 and 11.1.072 respectively, and one user
has the following request:
Can we build OpenMPI version say O.1 against Intel compilers version say
I.1 but then build an applicatio
Dick / all --
I just had a phone call with Ralph Castain who has had some additional off-list
mails with Randolph. Apparently, none of us understand the model that is being
used here. There are also apparently some confidentiality issues involved such
that it might be difficult to publicly st
Check "mpirun -h" and you'll see options to do exactly that...specifically, you
want -output-filename
On Aug 12, 2010, at 8:20 AM, Price, Brian M (N-KCI) wrote:
> All,
>
> Is there a simple way (environment variable or flag) to separate the output
> of each of my MPI processes into separate
All,
Is there a simple way (environment variable or flag) to separate the output of
each of my MPI processes into separate log files?
Thanks.
Brian Price
You said "separate MPI applications doing 1 to > N broadcasts over PVM".
You do not mean you are using pvm_bcast though - right?
If these N MPI applications are so independent that you could run one at a
time or run them on N different clusters and still get the result you want
(not the time
Hi,
I couldn't find any clue. Could you please provide me a small VS solution
together with the source file? It might be easier if I can simply check the
call stacks. BTW: please send it off-list if the file is too big. Thanks.
Shiqing
On 2010-8-12 1:20 PM, lyb wrote:
Hi,
Some other information
Can you try this with the current trunk (r23587 or later)?
I just added a number of new features and bug fixes, and I would be interested
to see if it fixes the problem. In particular I suspect that this might be
related to the Init/Finalize bounding of the checkpoint region.
-- Josh
On Aug 10
Building Open MPI with the option "--without-memory-manager" fixed my problem.
What exactly does compiling with this option imply?
I guess malloc then uses the libc functions instead of Open MPI's, but does
it have an effect on performance or something else?
Nicolas
2010/8/8 Nysal Jan
> What in
Hi again,
I think the problem is solved. Thanks to Gus, I've tried
mpirun -mca mpi_paffinity_alone 1
while running the program, and from a quick search on that, it seems to
make every process run on a specific core, I guess
(correct me if I'm wrong).
I've run over 20 tests, and now it works
Hi,
Some more information: the function breaks at the 3rd ASSERT.
I'm sending you the picture. Thanks.
Hello,
the message is,
Unhandled exception at 0x7835b701 (mfc80ud.dll): 0xC005: conflict
while reading 0xf78e9e00.
thanks.
Hi,
I personally haven't tried to program MPI with MFC, but in
Hello,
the message is,
Unhandled exception at 0x7835b701 (mfc80ud.dll): 0xC005: conflict
while reading 0xf78e9e00.
thanks.
Hi,
I personally haven't tried to program MPI with MFC, but in principle it
should work. What kind of error did you get? Was there any error
message? Thanks.
Shiqing
Hi Gus,
1 - first of all, turning off hyper-threading is not an option. And it gives
pretty good results if I can find a way to arrange the cores.
2 - Actually Eugene (in one of her messages in this thread) had suggested
arranging the slots.
I did and wrote up the results; it delivers the cores random
for
...
rank 13=os221 slot=2
rank 14=os222 slot=2
rank 15=os224 slot=2
rank 16=os228 slot=4
rank 17=os229 slot=4
I've tried and here are the results, same thing happened.
2010-08-12 11:09:28,814 59759 DEBUG [0x7fbd3fdce740] - RANK(0) Printing
Times...
2010-08-12 11:09:28,814 59759 DEBUG [0x7fbd3fd
Hi,
I personally haven't tried to program MPI with MFC, but in principle it
should work. What kind of error did you get? Was there any error
message? Thanks.
Shiqing
On 2010-8-12 9:13 AM, lyb wrote:
Hi,
I have an MFC project and need to add MPI functions to it, and I
chose Open MPI,
but
Hi,
I have an MFC project and need to add MPI functions to it; I chose
Open MPI,
but I searched the whole mailing list and did not find the answer.
I tried to call MPI functions under MFC, as follows:
int ompi_test(int *argc, char **argv)
{
int rank, size;
MPI_Init(argc, &argv);
MPI