I think, on the contrary, that he did notice the AMD/ARM issue. I suppose
you haven't read the text (and I like the fact that there are different
opinions on this issue).
2018-01-05 8:23 GMT+01:00 Gilles Gouaillardet :
> The technical assessment so
I don't think there is anything OpenMPI can do for you here. The issue is
clearly in how you are compiling your application.
To start, you can try to compile without the -march=generic and use
something as generic as possible (i.e. only SSE2). Then if this doesn't
work for your app, do the same
If you don't need to know whether the data was transferred, then why do
you transfer it in the first place? The scheme seems kind of strange, as
you don't have any clue that the data was actually transferred. Actually
without Wait and Test, you can pretty much assume you don't transfer
I think you have to call either Wait or Test to make the communications
move forward in the general case. Some hardware may have a hardware thread
that progresses the communication, but usually you have to make it "advance"
yourself by either calling Wait or Test.
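A minimal sketch of that pattern (a hypothetical two-rank exchange; the buffer size and tag are made up): compute in slices and poll with MPI_Testall, so the library gets cycles to advance the transfer.

```cpp
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Hypothetical exchange between two ranks (run with mpirun -np 2).
    double out[1024] = {0}, in[1024];
    int peer = 1 - rank;
    MPI_Request reqs[2];
    MPI_Isend(out, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(in, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    int done = 0;
    while (!done) {
        // ... do a slice of computation here ...
        MPI_Testall(2, reqs, &done, MPI_STATUSES_IGNORE);  // drives progress
    }

    MPI_Finalize();
    return 0;
}
```

This needs an MPI installation and mpirun to execute; it is only an illustration of the Test-while-computing loop, not code from the original thread.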
fer? Can I just
> set the request to MPI_SUCCESS for ranks which I will send zero buffer to
> and have no receive call?
> Is there any other MPI routine that can do MPI_Scatterv on selected ranks?
> without creating a new communicator?
> On Wed, Jul 16, 20
; On Wed, Jul 16, 2014 at 3:28 PM, Matthieu Brucher
> <matthieu.bruc...@gmail.com> wrote:
>> The easiest would be to bypass the Isend as well! The standard is
>> clear: you need a pair of Isend/Irecv.
The easiest would be to bypass the Isend as well! The standard is
clear: you need a pair of Isend/Irecv.
2014-07-16 14:27 GMT+01:00 Ziv Aginsky :
> I have a loop in which I will do some MPI_Isend. According to the MPI
> standard, for every send you need a
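As a sketch of that pairing (two-rank layout; the count and tag are placeholders): each MPI_Isend must have a matching receive on the other side, and must be completed locally before its buffer is touched again.

```cpp
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int data = 42;
    MPI_Request req;
    if (rank == 0) {                  // sender (run with mpirun -np 2)
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);  // only now may data be reused
    } else if (rank == 1) {           // the matching receive
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);  // only now is data valid
    }

    MPI_Finalize();
    return 0;
}
```

Again, this is an illustrative standalone program requiring an MPI runtime, not the poster's code.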
A simple test would be to run it with valgrind, so that out of bound
reads and writes will be obvious.
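A hypothetical invocation (the application name and rank count are placeholders):

```shell
# Run each rank under valgrind; out-of-bound accesses and uninitialised
# reads end up in one log file per process (%p expands to the PID).
mpirun -np 4 valgrind --track-origins=yes --log-file=vg.%p.log ./my_app
```

This is a command fragment: it assumes Open MPI and valgrind are installed and `./my_app` is your MPI binary.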
2014-05-08 21:16 GMT+02:00 Spenser Gilliland :
> George & Mattheiu,
>> The Alltoall should only return when all data is sent and received on
The Alltoall should only return when all data is sent and received on
the current rank, so there shouldn't be any race condition.
2014-05-08 15:53 GMT+02:00 Spenser Gilliland :
> George & other list members,
> I think I may have a race condition in
ied MPI_Waitall(), but the results are
> the same. It seems the communication didn't overlap with computation.
> On 4/5/14, Matthieu Brucher <matthieu.bruc...@gmail.com> wrote:
>> Try waiting on all gathers at the same time,
Try waiting on all gathers at the same time, not one by one (this is
what non-blocking collectives are made for!)
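A sketch of what that looks like (the vector shapes, root, and helper name are illustrative): post every MPI_Igather up front and complete them all with a single MPI_Waitall instead of waiting one by one.

```cpp
#include <mpi.h>
#include <cstddef>
#include <vector>

// Post all gathers, let computation overlap, then complete them together.
void gather_all(const std::vector<std::vector<double> > &send,
                std::vector<std::vector<double> > &recv,
                int count, MPI_Comm comm) {
    std::vector<MPI_Request> reqs(send.size());
    for (std::size_t i = 0; i < send.size(); ++i)
        MPI_Igather(send[i].data(), count, MPI_DOUBLE,
                    recv[i].data(), count, MPI_DOUBLE,
                    /*root=*/0, comm, &reqs[i]);
    // ... computation that overlaps with all of the gathers ...
    MPI_Waitall((int)reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);
}
```

MPI_Igather requires MPI-3 (available in Open MPI 1.8, the version under discussion); this helper needs an MPI environment to run.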
2014-04-05 10:35 GMT+01:00 Zehan Cui :
> I'm testing the non-blocking collective of OpenMPI-1.8.
> I have two nodes with
It seems that argv[argc] should always be NULL according to the
standard. So the OMPI failure is not actually a bug!
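That guarantee (C11 5.1.2.2.1 and the C++ equivalent: argv[argc] is a null pointer) is what lets code walk argv without being handed a length. A tiny standalone check, with a hypothetical helper name:

```cpp
// Counts entries of a NULL-terminated, argv-style array. Safe on main's
// argv because the standard guarantees argv[argc] == NULL.
int count_args(char **argv) {
    int n = 0;
    while (argv[n] != nullptr) ++n;
    return n;
}
```

On any hosted implementation, `count_args(argv) == argc` holds inside main, which is exactly the property opal_argv_join relies on.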
2013/11/12 Matthieu Brucher <matthieu.bruc...@gmail.com>:
> Interestingly enough, in ompi_mpi_init, opal_argv_join is called
> without the array length,
that the fault occurred at MPI_Init. The code works fine if I use
> MPI_Init(NULL,NULL) instead. The same code also compiles and runs without a
> problem on my laptop with mpich2-1.4.
> On Tue, Nov 12, 2013 at 11:18 AM, Matthieu Bruch
Are you sure this is the correct code? This seems strange and not a good idea:
// do something...
for( int i = 0 ; i < argc ; i++ ) delete  argv[i];
delete  argv;
Did you mean argc_new and argv_new instead?
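If the intent was to free a deep copy of the arguments (the argc_new/argv_new names here are hypothetical), the release must use delete[] to match new[], element by element and then the array itself:

```cpp
#include <cstring>

// Build an argv-style deep copy with new[] ...
char **copy_args(int argc, char **argv) {
    char **argv_new = new char *[argc + 1];
    for (int i = 0; i < argc; ++i) {
        argv_new[i] = new char[std::strlen(argv[i]) + 1];
        std::strcpy(argv_new[i], argv[i]);
    }
    argv_new[argc] = nullptr;  // preserve the argv[argc] == NULL convention
    return argv_new;
}

// ... and release it with delete[] (plain delete here is undefined behavior).
void free_args(int argc, char **argv_new) {
    for (int i = 0; i < argc; ++i) delete[] argv_new[i];
    delete[] argv_new;
}
```

In particular, never delete the real argv/argc that main receives; they are owned by the runtime.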
Do you have the same error without CUDA?
I tried with the latest nightly (well now it may not be the latest
anymore), and orte-info didn't crash. So I'll try again later with my
2013/9/15 Matthieu Brucher <matthieu.bruc...@gmail.com>:
> I can try later this week, yes.
> Le 1
e releasing 1.7.3 shortly and it is mostly complete at this time.
> On Sep 15, 2013, at 10:43 AM, Matthieu Brucher <matthieu.bruc...@gmail.com>
> Yes, ompi_info does not crash.
> Le 15 sept. 2013 18:05, "Ralph Castain" <r...@open-mpi.org> a éc
Yes, ompi_info does not crash.
Le 15 sept. 2013 18:05, "Ralph Castain" <r...@open-mpi.org> a écrit :
> No - out of curiosity, does ompi_info work? I'm wondering if this is
> strictly an orte-info problem.
> On Sep 15, 2013, at 10:03 AM, Matthieu Brucher <matt
> On Sep 12, 2013, at 3:17 AM, Matthieu Brucher <matthieu.bruc...@gmail.com>
> > Hi,
> > I compiled OpenMPI on a RHEL6 box with LSF support, but when I run
> > something, it crashes. Also orte-info crashes:
> > Pac
I compiled OpenMPI on a RHEL6 box with LSF support, but when I run
something, it crashes. Also orte-info crashes:
Package: Open MPI mbruc...@xxx.com Distribution
Open RTE: 1.7.2
Open RTE repo revision: r28673
Open RTE release date: Jun 26, 2013
I saw a typo on the FAQ page
http://www.open-mpi.org/faq/?category=mpi-apps. It says that the
variable to change the CXX compiler is OMPI_MPIXX, but it is
OMPI_MPICXX (a C is missing).
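For reference, the corrected usage looks like this (the compiler choice is just an example):

```shell
# OMPI_MPICXX (not OMPI_MPIXX) selects the underlying C++ compiler
# used by the mpicxx/mpiCC wrapper when building applications.
export OMPI_MPICXX=g++
```
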
This may be because you have an error in the parallel communication
pattern. Without other information, it is difficult to say anything
else. Try debugging your application.
2013/2/24, Mohammad Mohsenie :
> Dear All,
> I have installed openmpi
You need to use the command prompt provided by Visual Studio and it will
2012/5/18 Ghobad Zarrinchian
> Hi. I've installed Visual Studio 2008 on my machine. But i have still the
> same problem. How can i solve it? thx
> On Fri, May 11, 2012 at
Just my opinion.
2011/12/23 Santosh Ansumali <ansum...@gmail.com>
> Dear All,
> We are running a PDE solver which is memory bound. Due to
> cache related issue, smaller number of grid point per core leads to
> better performance for this code. Thus
other. This is what MPI_Init is used for.
2011/12/14 Dmitry N. Mikushin <maemar...@gmail.com>
> Dear colleagues,
> For GPU Winter School powered by Moscow State University cluster
> "Lomonosov", the OpenMPI 1.7 was built to test and populariz
Don't forget that MPT has some optimizations OpenMPI may not have, such
as "overriding" free(). This way, MPT can have a huge performance boost
if you're allocating and freeing memory, and the same happens if you
2010/12/21 Gilbert Grosdidier :
2010/6/21 Jack Bryan :
> thank you very much for your help.
> What is the meaning of " must find a system so that every task can be
> serialized in the same form." What is the meaning of "serize " ?
Serialize is the process of converting an object instance into a
2010/6/20 Jack Bryan :
> Hi, Matthieu:
> Thanks for your help.
> Most of your ideas show that what I want to do.
> My scheduler should be able to be called from any C++ program, which can
> a list of tasks to the scheduler and then the scheduler distributes the
What you are seeking is the client/server pattern. Have one node act
as a server. It will create a list of tasks or even a graph of tasks
if you have dependencies, and then create clients that will connect to
the server with an RPC protocol (I've done this with a SOAP+TCP
You can try MPE (free) or Vampir (not free, but can be integrated
2009/9/29 Rahul Nabar :
> I have a code that seems to run about 40% faster when I bond together
> twin eth interfaces. The question, of course, arises: is it really
> producing so
Strange that it indicates the whole path. I had the same issue, but it
only said that orted couldn't be found. In my .bashrc, I put what it
needed to get orted in my PATH, and it worked.
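What that looks like in practice (the install prefix is hypothetical):

```shell
# In ~/.bashrc on every node, so non-interactive remote shells find orted:
export PATH="$HOME/openmpi/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/openmpi/lib:$LD_LIBRARY_PATH"
```
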
2009/8/8 Ralph Castain :
> Not that I know of - I don't think we currently have
> IF boost is attached to MPI 3 (or whatever), AND it becomes part of the
> mainstream MPI implementations, THEN you can have the discussion again.
At the moment, I think that Boost.MPI only supports MPI 1.1, and even
then, some additional work may be needed, at least regarding the complex
Thank you a lot for this.
I've just checked everything again, recompiled my code as well (I'm
using SCons so it detects that the headers and the libraries changed)
and it works without a warning.
2009/5/12 Jeff Squyres <jsquy...@cisco.com>:
> On May 12, 2009, at 8:17 AM,
2009/5/12 Jeff Squyres :
> Or it could be that you installed 1.3.2 over 1.2.8 -- some of the 1.2.8
> components that no longer exist in the 1.3 series are still in the
> installation tree, but failed to open properly (unfortunately, libltdl gives
> an incorrect "file not found"
I've managed to use 1.3.2 (still not with LSF and InfiniPath, I start
one step after another), but I have additional warnings that didn't
show up in 1.2.8:
[host-b:09180] mca: base: component_find: unable to open
/home/brucher/lib/openmpi/mca_ras_dash_host: file not found (ignored)
of setting the necessary environment variables and
> eventually calls the correct mpirun. (the option "-a openmpi" tells LSF that
> we're using OpenMPI so don't try to autodetect)
> Jeroen Kleijer
> On Tue, May 5, 2009 at 2:23 PM,
2009/5/6 Jeff Squyres <jsquy...@cisco.com>:
> On May 5, 2009, at 10:01 AM, Matthieu Brucher wrote:
>> > What Terry said is correct. It means that "mpirun" will use, under the
>> > covers, the "native" launching mechanism of LSF to launch
2009/5/5 Jeff Squyres <jsquy...@cisco.com>:
> On May 5, 2009, at 6:10 AM, Matthieu Brucher wrote:
>> The first is what the support of LSF by OpenMPI means. When mpirun is
>> executed, is it an LSF job that is actually run? Or what does it
>> imply? I've tried to
My second question is about the LSF detection. lsf.h is detected, but
when lsb_launch is searched for in libbat.so, it fails because
parse_time and parse_time_ex are not found. Is there a way to add
additional LSF libraries so that the search can be done?