If you don't need to know whether the data was transferred, why transfer
it in the first place? The scheme seems strange, as you have no way of
knowing the data was actually transferred. Without Wait and Test, you
can pretty much assume nothing was transferred at all.
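To illustrate the point, here is a minimal sketch (names and the ring of two ranks are illustrative, not from the original thread): the Isend returns immediately, and only MPI_Test or MPI_Wait tells you the operation actually completed.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int value = 42;
    MPI_Request req;

    if (rank == 0) {
        /* Nonblocking send: returns immediately; the transfer is NOT
         * guaranteed to have happened yet. */
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);

        /* Only MPI_Test or MPI_Wait tells you the operation completed
         * and the send buffer may be reused. */
        int done = 0;
        while (!done)
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    } else if (rank == 1) {
        int recvd;
        MPI_Recv(&recvd, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received %d\n", recvd);
    }

    MPI_Finalize();
    return 0;
}
```

Run with at least two ranks, e.g. `mpirun -np 2 ./a.out`.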
Hi,
I saw a typo on the FAQ page
http://www.open-mpi.org/faq/?category=mpi-apps. It says that the
variable to change the CXX compiler is OMPI_MPIXX, but it is
OMPI_MPICXX (a C is missing).
Cheers,
--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn:
Hi,
I compiled OpenMPI on a RHEL6 box with LSF support, but when I run
something, it crashes. Also orte-info crashes:
Package: Open MPI mbruc...@xxx.com Distribution
Open RTE: 1.7.2
Open RTE repo revision: r28673
Open RTE release date: Jun 26, 2013
> On Sep 12, 2013, at 3:17 AM, Matthieu Brucher <matthieu.bruc...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I compiled OpenMPI on a RHEL6 box with LSF support, but when I run
> > something, it crashes. Also orte-info crashes:
> >
> > Pac
Yes, ompi_info does not crash.
On Sep 15, 2013 at 18:05, "Ralph Castain" <r...@open-mpi.org> wrote:
> No - out of curiosity, does ompi_info work? I'm wondering if this is
> strictly an orte-info problem.
>
> On Sep 15, 2013, at 10:03 AM, Matthieu Brucher <matt
e releasing 1.7.3 shortly and it is mostly complete at this time.
>
>
> On Sep 15, 2013, at 10:43 AM, Matthieu Brucher <matthieu.bruc...@gmail.com>
> wrote:
>
> Yes, ompi_info does not crash.
> On Sep 15, 2013 at 18:05, "Ralph Castain" <r...@open-mpi.org> wrote:
Hi,
I tried with the latest nightly (well now it may not be the latest
anymore), and orte-info didn't crash. So I'll try again later with my
app.
thanks,
Matthieu
2013/9/15 Matthieu Brucher <matthieu.bruc...@gmail.com>:
> I can try later this week, yes.
> Thanks
>
> Le 1
Hi,
Are you sure this is the correct code? This seems strange and not a good idea:
MPI_Init(&argc, &argv);
// do something...
for( int i = 0 ; i < argc ; i++ ) delete [] argv[i];
delete [] argv;
Did you mean argc_new and argv_new instead?
Do you have the same error without CUDA?
Cheers,
that the fault occurred at MPI_Init. The code works fine if I use
> MPI_Init(NULL,NULL) instead. The same code also compiles and runs without a
> problem on my laptop with mpich2-1.4.
>
> Best,
> Yu-Hang
>
>
>
> On Tue, Nov 12, 2013 at 11:18 AM, Matthieu Bruch
It seems that argv[argc] should always be NULL according to the
standard. So the OMPI failure is not actually a bug!
Cheers,
2013/11/12 Matthieu Brucher <matthieu.bruc...@gmail.com>:
> Interestingly enough, in ompi_mpi_init, opal_argv_join is called
> without the array length,
Hi,
Try waiting on all gathers at the same time, not one by one (this is
what non-blocking collectives are made for!)
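A sketch of the idea (the count of four gathers and the buffer shapes are made up for illustration): start every MPI_Igather first, then complete them all with one MPI_Waitall instead of waiting on each request in turn.

```c
#include <mpi.h>
#include <stdlib.h>

#define NGATHERS 4

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int sendbuf[NGATHERS];
    int *recvbuf[NGATHERS];
    MPI_Request reqs[NGATHERS];

    for (int i = 0; i < NGATHERS; ++i) {
        sendbuf[i] = i;
        recvbuf[i] = malloc(size * sizeof(int));
        /* Start all the gathers up front... */
        MPI_Igather(&sendbuf[i], 1, MPI_INT, recvbuf[i], 1, MPI_INT,
                    0, MPI_COMM_WORLD, &reqs[i]);
    }

    /* ...overlap independent computation here, then complete them
     * all at once rather than one by one. */
    MPI_Waitall(NGATHERS, reqs, MPI_STATUSES_IGNORE);

    for (int i = 0; i < NGATHERS; ++i)
        free(recvbuf[i]);
    MPI_Finalize();
    return 0;
}
```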
Cheers,
Matthieu
2014-04-05 10:35 GMT+01:00 Zehan Cui :
> Hi,
>
> I'm testing the non-blocking collective of OpenMPI-1.8.
>
> I have two nodes with
> I tried MPI_Waitall(), but the results are
> the same. It seems the communication didn't overlap with computation.
>
> Regards,
> Zehan
>
> On 4/5/14, Matthieu Brucher <matthieu.bruc...@gmail.com> wrote:
>> Hi,
>>
>> Try waiting on all gathers at the same time,
The Alltoall should only return when all data is sent and received on
the current rank, so there shouldn't be any race condition.
Cheers,
Matthieu
2014-05-08 15:53 GMT+02:00 Spenser Gilliland :
> George & other list members,
>
> I think I may have a race condition in
A simple test would be to run it with valgrind, so that out-of-bounds
reads and writes become obvious.
Cheers,
Matthieu
2014-05-08 21:16 GMT+02:00 Spenser Gilliland :
> George & Matthieu,
>
>> The Alltoall should only return when all data is sent and received on
>> the
Hi,
The easiest would be to bypass the Isend as well! The standard is
clear: you need matching Isend/Irecv pairs.
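A minimal sketch of a properly paired exchange (the ring pattern is illustrative, not from the original thread): each Isend has a matching Irecv, and both requests are completed with MPI_Waitall.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank + size - 1) % size;

    int out = rank, in = -1;
    MPI_Request reqs[2];

    /* Every Isend needs a matching receive. Posting the Irecv first
     * and completing both with MPI_Waitall keeps the ring exchange
     * deadlock-free. */
    MPI_Irecv(&in, 1, MPI_INT, left, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&out, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, in, left);

    MPI_Finalize();
    return 0;
}
```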
Cheers,
2014-07-16 14:27 GMT+01:00 Ziv Aginsky :
> I have a loop in which I will do some MPI_Isend. According to the MPI
> standard, for every send you need a
> On Wed, Jul 16, 2014 at 3:28 PM, Matthieu Brucher
> <matthieu.bruc...@gmail.com> wrote:
>>
>> Hi,
>>
>> The easiest would be to bypass the Isend as well! The standard is
>> clear: you need matching Isend/Irecv pairs.
>>
>> Cheers,
>>
>> 2014-07-1
fer? Can I just
> set the request to MPI_SUCCESS for ranks which I will send zero buffer to
> and have no receive call?
> Is there any other MPI routine that can do MPI_Scatterv on selected ranks?
> without creating a new communicator?
>
>
>
>
> On Wed, Jul 16, 20
Don't forget that MPT has some optimizations OpenMPI may not have,
such as "overriding" free(). This way, MPT can get a huge performance
boost if you allocate and free memory often, and the same happens if
you communicate often.
Matthieu
2010/12/21 Gilbert Grosdidier :
.
Just my opinion.
Matthieu Brucher
2011/12/23 Santosh Ansumali <ansum...@gmail.com>
> Dear All,
> We are running a PDE solver which is memory bound. Due to
> cache related issue, smaller number of grid point per core leads to
> better performance for this code. Thus
Hi,
You need to use the command prompt provided by Visual Studio and it will
work.
Matthieu
2012/5/18 Ghobad Zarrinchian
> Hi. I've installed Visual Studio 2008 on my machine. But i have still the
> same problem. How can i solve it? thx
>
>
> On Fri, May 11, 2012 at
Hi,
This may be because you have an error in the parallel communication
pattern. Without more information, it is difficult to say anything
else. Try debugging your application.
Matthieu
2013/2/24, Mohammad Mohsenie :
> Dear All,
> Greetings,
>
> I have installed openmpi
> IF boost is attached to MPI 3 (or whatever), AND it becomes part of the
> mainstream MPI implementations, THEN you can have the discussion again.
Hi,
At the moment, I think that Boost.MPI only supports MPI 1.1, and even
then, some additional work may be needed, at least regarding the complex
Strange that it indicates the whole path. I had the same issue, but it
only said that orted couldn't be found. In my .bashrc, I put what it
needed to get orted in my PATH, and it worked.
Matthieu
2009/8/8 Ralph Castain :
> Not that I know of - I don't think we currently have
Hi Jack,
What you are seeking is the client/server pattern. Have one node act
as a server. It will create a list of tasks or even a graph of tasks
if you have dependencies, and then create clients that will connect to
the server with an RPC protocol (I've done this with a SOAP+TCP
protocol, the
2010/6/20 Jack Bryan :
> Hi, Matthieu:
> Thanks for your help.
> Most of your ideas show that what I want to do.
> My scheduler should be able to be called from any C++ program, which can
> put
> a list of tasks to the scheduler and then the scheduler distributes the
>
2010/6/21 Jack Bryan :
> Hi,
> thank you very much for your help.
> What is the meaning of " must find a system so that every task can be
> serialized in the same form." What is the meaning of "serize " ?
Serialize is the process of converting an object instance into a
.
My second question is about the LSF detection. lsf.h is detected, but
when lsb_launch is searched for in libbat.so, it fails because
parse_time and parse_time_ex are not found. Is there a way to add
additional lsf libraries so that the search can be done?
Matthieu Brucher
--
Information System
2009/5/5 Jeff Squyres <jsquy...@cisco.com>:
> On May 5, 2009, at 6:10 AM, Matthieu Brucher wrote:
>
>> The first is what the support of LSF by OpenMPI means. When mpirun is
>> executed, is it an LSF job that is actually run? Or what does it
>> imply? I've tried to
2009/5/6 Jeff Squyres <jsquy...@cisco.com>:
> On May 5, 2009, at 10:01 AM, Matthieu Brucher wrote:
>
>> > What Terry said is correct. It means that "mpirun" will use, under the
>> > covers, the "native" launching mechanism of LSF to launch
of setting the necessary environment variables and
> eventually calls the correct mpirun. (the option "-a openmpi" tells LSF that
> we're using OpenMPI so don't try to autodetect)
>
>
>
> Regards,
>
>
>
> Jeroen Kleijer
>
> On Tue, May 5, 2009 at 2:23 PM,
Hi,
I've managed to use 1.3.2 (still not with LSF and InfiniPath, I start
one step after another), but I have additional warnings that didn't
show up in 1.2.8:
[host-b:09180] mca: base: component_find: unable to open
/home/brucher/lib/openmpi/mca_ras_dash_host: file not found (ignored)
2009/5/12 Jeff Squyres :
> Or it could be that you installed 1.3.2 over 1.2.8 -- some of the 1.2.8
> components that no longer exist in the 1.3 series are still in the
> installation tree, but failed to open properly (unfortunately, libltdl gives
> an incorrect "file not found"
Thank you a lot for this.
I've just checked everything again, recompiled my code as well (I'm
using SCons so it detects that the headers and the libraries changed)
and it works without a warning.
Matthieu
2009/5/12 Jeff Squyres <jsquy...@cisco.com>:
> On May 12, 2009, at 8:17 AM,
I don't think there is anything OpenMPI can do for you here. The issue is
clearly in how you are compiling your application.
To start, you can try to compile without the --march=generic and use
something as generic as possible (i.e. only SSE2). Then if this doesn't
work for your app, do the same
Hi,
I think, on the contrary, that he did notice the AMD/ARM issue. I suppose
you haven't read the text (and I like the fact that there are different
opinions on this issue).
Matthieu
2018-01-05 8:23 GMT+01:00 Gilles Gouaillardet :
> John,
>
>
> The technical assessment so