Re: [OMPI users] latest Intel CPU bug

2018-01-05 Thread Matthieu Brucher
Hi, I think, on the contrary, that he did notice the AMD/ARM issue. I suppose you haven't read the text (and I like the fact that there are different opinions on this issue). Matthieu 2018-01-05 8:23 GMT+01:00 Gilles Gouaillardet : > John, > > > The technical assessment so

Re: [OMPI users] Still "illegal instruction"

2016-09-15 Thread Matthieu Brucher
I don't think there is anything OpenMPI can do for you here. The issue is clearly in how you are compiling your application. To start, try compiling without --march=generic and target something as generic as possible (i.e. only SSE2). Then if this doesn't work for your app, do the same
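That advice can be tried with a conservative build first; this is a sketch assuming GCC-style flags and the Open MPI wrapper compiler (the file names are illustrative):

```shell
# Build for the plain x86-64 baseline (SSE2 only, no AVX or newer).
# If the "illegal instruction" goes away, re-enable newer ISA
# extensions one at a time to find the offending flag.
mpicc -O2 -march=x86-64 -o app app.c
```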

Re: [OMPI users] Fwd: Can I just use non-blocking send/receive without calling MPI_Wait ever

2015-04-03 Thread Matthieu Brucher
If you don't need to know whether the data was transferred, then why do you transfer it in the first place? The scheme seems strange, as you have no indication that the data was actually transferred. Without Wait or Test, you can pretty much assume you don't transfer anything.

Re: [OMPI users] Can I just use non-blocking send/receive without calling MPI_Wait ever

2015-04-03 Thread Matthieu Brucher
Hi, I think you have to call either Wait or Test to make the communications move forward in the general case. Some hardware may have a hardware thread that progresses the communication, but usually you have to make it "advance" yourself by calling either Wait or Test. Cheers, Matthieu 2015-04-03
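A minimal sketch of that advice (the function and variable names are illustrative, not from the thread):

```c
#include <mpi.h>

/* Post a nonblocking send, overlap some work, then complete it.
   Until MPI_Wait (or a successful MPI_Test) is called, many
   implementations make no progress on the transfer at all. */
void send_with_overlap(const double *buf, int count, int dest, MPI_Comm comm)
{
    MPI_Request req;
    MPI_Isend(buf, count, MPI_DOUBLE, dest, 0 /* tag */, comm, &req);

    /* ... computation overlapping the transfer ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE); /* completion guaranteed only here */
}
```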

Re: [OMPI users] MPI_Isend with no receive

2014-07-16 Thread Matthieu Brucher
fer? Can I just > set the request to MPI_SUCCESS for ranks which I will send zero buffer to > and have no receive call? > Is there any other MPI routine that can do MPI_Scatterv on selected ranks? > without creating a new communicator? > > > > > On Wed, Jul 16, 20

Re: [OMPI users] MPI_Isend with no receive

2014-07-16 Thread Matthieu Brucher
; On Wed, Jul 16, 2014 at 3:28 PM, Matthieu Brucher > <matthieu.bruc...@gmail.com> wrote: >> >> Hi, >> >> The easiest would be to bypass the Isend as well! The standard is >> clear, you need a pair of Isend/Irecv. >> >> Cheers, >> >> 2014-07-1

Re: [OMPI users] MPI_Isend with no receive

2014-07-16 Thread Matthieu Brucher
Hi, The easiest would be to bypass the Isend as well! The standard is clear: you need a pair of Isend/Irecv. Cheers, 2014-07-16 14:27 GMT+01:00 Ziv Aginsky : > I have a loop in which I will do some MPI_Isend. According to the MPI > standard, for every send you need a
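One common way to keep the pairing when a rank has "nothing" to send is a zero-count message; this is a sketch of that pattern, not the poster's code:

```c
#include <mpi.h>

/* Even when there is no payload for a peer, keep the Isend/Irecv pair
   and send a zero-count message, so every posted send is matched. */
void exchange_maybe_empty(const double *buf, int count, int peer, MPI_Comm comm)
{
    MPI_Request reqs[2];
    MPI_Isend(buf, count, MPI_DOUBLE, peer, 0, comm, &reqs[0]); /* count may be 0 */
    MPI_Irecv(NULL, 0, MPI_DOUBLE, peer, 0, comm, &reqs[1]);    /* matching recv */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}
```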

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Matthieu Brucher
A simple test would be to run it with valgrind, so that out-of-bounds reads and writes will be obvious. Cheers, Matthieu 2014-05-08 21:16 GMT+02:00 Spenser Gilliland : > George & Mattheiu, > >> The Alltoall should only return when all data is sent and received on >> the
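A quick way to apply that suggestion, assuming valgrind is installed (the binary name is illustrative):

```shell
# Run each rank under valgrind; out-of-bounds accesses caused by a bad
# vector datatype show up as "Invalid read/write" in the per-rank logs
# (%p expands to each process's PID).
mpirun -np 4 valgrind --log-file=vg.%p.log ./alltoall_test
```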

Re: [OMPI users] MPI_Alltoall with Vector Datatype

2014-05-08 Thread Matthieu Brucher
The Alltoall should only return when all data is sent and received on the current rank, so there shouldn't be any race condition. Cheers, Matthieu 2014-05-08 15:53 GMT+02:00 Spenser Gilliland : > George & other list members, > > I think I may have a race condition in

Re: [OMPI users] performance of MPI_Iallgatherv

2014-04-08 Thread Matthieu Brucher
ied MPI_Waitall(), but the results are > the same. It seems the communication didn't overlap with computation. > > Regards, > Zehan > > On 4/5/14, Matthieu Brucher <matthieu.bruc...@gmail.com> wrote: >> Hi, >> >> Try waiting on all gathers at the same time,

Re: [OMPI users] performance of MPI_Iallgatherv

2014-04-05 Thread Matthieu Brucher
Hi, Try waiting on all gathers at the same time, not one by one (this is what non-blocking collectives are made for!) Cheers, Matthieu 2014-04-05 10:35 GMT+01:00 Zehan Cui : > Hi, > > I'm testing the non-blocking collective of OpenMPI-1.8. > > I have two nodes with
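A sketch of the suggested structure (buffer names and the request count are illustrative): issue all the iallgathers first, then complete the whole batch with one MPI_Waitall instead of waiting on each request in turn.

```c
#include <mpi.h>

enum { NREQ = 4 }; /* illustrative number of concurrent gathers */

void overlap_gathers(const double *sendbuf, int scount,
                     double *recvbufs[NREQ], const int *rcounts,
                     const int *displs, MPI_Comm comm)
{
    MPI_Request reqs[NREQ];

    /* Post every collective before waiting on any of them. */
    for (int i = 0; i < NREQ; ++i)
        MPI_Iallgatherv(sendbuf, scount, MPI_DOUBLE,
                        recvbufs[i], rcounts, displs, MPI_DOUBLE,
                        comm, &reqs[i]);

    /* ... computation overlapping all the transfers ... */

    /* One Waitall on the batch; waiting one by one serializes completion. */
    MPI_Waitall(NREQ, reqs, MPI_STATUSES_IGNORE);
}
```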

Re: [OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main()

2013-11-12 Thread Matthieu Brucher
It seems that argv[argc] should always be NULL according to the standard, so the OMPI failure is not actually a bug! Cheers, 2013/11/12 Matthieu Brucher <matthieu.bruc...@gmail.com>: > Interestingly enough, in ompi_mpi_init, opal_argv_join is called > without the array length,

Re: [OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main()

2013-11-12 Thread Matthieu Brucher
that the fault occurred at MPI_Init. The code works fine if I use > MPI_Init(NULL,NULL) instead. The same code also compiles and runs without a > problem on my laptop with mpich2-1.4. > > Best, > Yu-Hang > > > > On Tue, Nov 12, 2013 at 11:18 AM, Matthieu Bruch

Re: [OMPI users] Segmentation fault in MPI_Init when passing pointers allocated in main()

2013-11-12 Thread Matthieu Brucher
Hi, Are you sure this is the correct code? This seems strange and not a good idea: MPI_Init(&argc, &argv); // do something... for( int i = 0 ; i < argc ; i++ ) delete [] argv[i]; delete [] argv; Did you mean argc_new and argv_new instead? Do you have the same error without CUDA? Cheers,

Re: [OMPI users] Segmentation fault with fresh compilation of 1.7.2

2013-09-19 Thread Matthieu Brucher
Hi, I tried with the latest nightly (well now it may not be the latest anymore), and orte-info didn't crash. So I'll try again later with my app. thanks, Matthieu 2013/9/15 Matthieu Brucher <matthieu.bruc...@gmail.com>: > I can try later this week, yes. > Thanks > > Le 1

Re: [OMPI users] Segmentation fault with fresh compilation of 1.7.2

2013-09-15 Thread Matthieu Brucher
e releasing 1.7.3 shortly and it is mostly complete at this time. > > > On Sep 15, 2013, at 10:43 AM, Matthieu Brucher <matthieu.bruc...@gmail.com> > wrote: > > Yes, ompi_info does not crash. > Le 15 sept. 2013 18:05, "Ralph Castain" <r...@open-mpi.org> a éc

Re: [OMPI users] Segmentation fault with fresh compilation of 1.7.2

2013-09-15 Thread Matthieu Brucher
Yes, ompi_info does not crash. On 15 Sept 2013 18:05, "Ralph Castain" <r...@open-mpi.org> wrote: > No - out of curiosity, does ompi_info work? I'm wondering if this is > strictly an orte-info problem. > > On Sep 15, 2013, at 10:03 AM, Matthieu Brucher <matt

Re: [OMPI users] Segmentation fault with fresh compilation of 1.7.2

2013-09-15 Thread Matthieu Brucher
> On Sep 12, 2013, at 3:17 AM, Matthieu Brucher <matthieu.bruc...@gmail.com> > wrote: > > > Hi, > > > > I compiled OpenMPI on a RHEL6 box with LSF support, but when I run > > something, it crashes. Also orte-info crashes: > > > > Pac

[OMPI users] Segmentation fault with fresh compilation of 1.7.2

2013-09-12 Thread Matthieu Brucher
Hi, I compiled OpenMPI on a RHEL6 box with LSF support, but when I run something, it crashes. Also orte-info crashes: Package: Open MPI mbruc...@xxx.com Distribution Open RTE: 1.7.2 Open RTE repo revision: r28673 Open RTE release date: Jun 26, 2013

[OMPI users] Typo on the FAQ page

2013-09-12 Thread Matthieu Brucher
Hi, I saw a typo on the FAQ page http://www.open-mpi.org/faq/?category=mpi-apps. It says that the variable to change the CXX compiler is OMPI_MPIXX, but it is OMPI_MPICXX (a C is missing). Cheers, -- Information System Engineer, Ph.D. Blog: http://matt.eifelle.com LinkedIn:
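For reference, the corrected variable in use; this is a sketch assuming the Open MPI wrapper compilers are on the PATH:

```shell
# The wrapper reads OMPI_MPICXX (with the C), not OMPI_MPIXX,
# to choose the underlying C++ compiler:
export OMPI_MPICXX=g++
# Show the command line the wrapper would invoke (guarded in case
# mpicxx is not installed in this environment):
command -v mpicxx >/dev/null && mpicxx --showme || true
```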

Re: [OMPI users] mpirun (Aborted) error

2013-02-24 Thread Matthieu Brucher
Hi, This may be because you have an error in the parallel communication pattern. Without more information, it is difficult to say more. Try debugging your application. Matthieu 2013/2/24, Mohammad Mohsenie : > Dear All, > Greetings, > > I have installed openmpi

Re: [OMPI users] starting open-mpi

2012-05-18 Thread Matthieu Brucher
Hi, You need to use the command prompt provided by Visual Studio and it will work. Matthieu 2012/5/18 Ghobad Zarrinchian > Hi. I've installed Visual Studio 2008 on my machine. But i have still the > same problem. How can i solve it? thx > > > On Fri, May 11, 2012 at

Re: [OMPI users] over-subscription of cores

2011-12-26 Thread Matthieu Brucher
. Just my opinion. Matthieu Brucher 2011/12/23 Santosh Ansumali <ansum...@gmail.com> > Dear All, >We are running a PDE solver which is memory bound. Due to > cache related issue, smaller number of grid point per core leads to > better performance for this code. Thus

Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?

2011-12-14 Thread Matthieu Brucher
each other. This is what MPI_Init is used for. Matthieu Brucher 2011/12/14 Dmitry N. Mikushin <maemar...@gmail.com> > Dear colleagues, > > For GPU Winter School powered by Moscow State University cluster > "Lomonosov", the OpenMPI 1.7 was built to test and populariz

Re: [OMPI users] Running OpenMPI on SGI Altix with 4096 cores: very poor performance

2010-12-21 Thread Matthieu Brucher
Don't forget that MPT has some optimizations OpenMPI may not have, such as "overriding" free(). This way, MPT can get a huge performance boost if you're allocating and freeing memory often, and the same happens if you communicate often. Matthieu 2010/12/21 Gilbert Grosdidier :

Re: [OMPI users] Open MPI task scheduler

2010-06-21 Thread Matthieu Brucher
2010/6/21 Jack Bryan : > Hi, > thank you very much for your help. > What is the meaning of " must find a system so that every task can be > serialized in the same form." What is the meaning of "serize " ? Serialize is the process of converting an object instance into a
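To make "serialize" concrete, a minimal sketch in C with a hypothetical Task type: the object is flattened into a contiguous byte buffer that can be shipped over MPI_Send or any RPC transport and rebuilt on the other side.

```c
#include <stdint.h>
#include <string.h>

/* A hypothetical task object to be sent between scheduler and workers. */
typedef struct { int32_t id; double payload; } Task;

/* Flatten the fields one after another into buf; returns bytes written.
   (A real scheduler would also handle endianness, e.g. via MPI_Pack.) */
size_t task_serialize(const Task *t, unsigned char *buf)
{
    memcpy(buf, &t->id, sizeof t->id);
    memcpy(buf + sizeof t->id, &t->payload, sizeof t->payload);
    return sizeof t->id + sizeof t->payload;
}

/* Rebuild a Task from the byte buffer; returns bytes consumed. */
size_t task_deserialize(Task *t, const unsigned char *buf)
{
    memcpy(&t->id, buf, sizeof t->id);
    memcpy(&t->payload, buf + sizeof t->id, sizeof t->payload);
    return sizeof t->id + sizeof t->payload;
}
```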

Re: [OMPI users] Open MPI task scheduler

2010-06-20 Thread Matthieu Brucher
2010/6/20 Jack Bryan : > Hi, Matthieu: > Thanks for your help. > Most of your ideas show that what I want to do. > My scheduler should be able to be called from any C++ program, which can > put > a list of tasks to the scheduler and then the scheduler distributes the >

Re: [OMPI users] Open MPI task scheduler

2010-06-20 Thread Matthieu Brucher
Hi Jack, What you are seeking is the client/server pattern. Have one node act as a server. It will create a list of tasks or even a graph of tasks if you have dependencies, and then create clients that will connect to the server with an RPC protocol (I've done this with a SOAP+TCP protocol, the

Re: [OMPI users] profile the performance of a MPI code: how much traffic is being generated?

2009-09-29 Thread Matthieu Brucher
Hi, You can try MPE (free) or Vampir (not free, but can be integrated inside OpenMPI). Matthieu 2009/9/29 Rahul Nabar : > I have a code that seems to run about 40% faster when I bond together > twin eth interfaces. The question, of course, arises: is it really > producing so

Re: [OMPI users] bin/orted: Command not found.

2009-08-08 Thread Matthieu Brucher
Strange that it indicates the whole path. I had the same issue, but it only said that orted couldn't be found. In my .bashrc, I put what it needed to get orted in my PATH, and it worked. Matthieu 2009/8/8 Ralph Castain : > Not that I know of - I don't think we currently have

Re: [OMPI users] MPI and C++ (Boost)

2009-07-07 Thread Matthieu Brucher
> IF boost is attached to MPI 3 (or whatever), AND it becomes part of the > mainstream MPI implementations, THEN you can have the discussion again. Hi, At the moment, I think that Boost.MPI only supports MPI1.1, and even then, some additional work may be done, at least regarding the complex

Re: [OMPI users] New warning messages in 1.3.2 (not present in1.2.8)

2009-05-12 Thread Matthieu Brucher
Thank you a lot for this. I've just checked everything again, recompiled my code as well (I'm using SCons so it detects that the headers and the libraries changed) and it works without a warning. Matthieu 2009/5/12 Jeff Squyres <jsquy...@cisco.com>: > On May 12, 2009, at 8:17 AM,

Re: [OMPI users] New warning messages in 1.3.2 (not present in1.2.8)

2009-05-12 Thread Matthieu Brucher
2009/5/12 Jeff Squyres : > Or it could be that you installed 1.3.2 over 1.2.8 -- some of the 1.2.8 > components that no longer exist in the 1.3 series are still in the > installation tree, but failed to open properly (unfortunately, libltdl gives > an incorrect "file not found"

[OMPI users] New warning messages in 1.3.2 (not present in 1.2.8)

2009-05-12 Thread Matthieu Brucher
Hi, I've managed to use 1.3.2 (still not with LSF and InfiniPath, I start one step after another), but I have additional warnings that didn't show up in 1.2.8: [host-b:09180] mca: base: component_find: unable to open /home/brucher/lib/openmpi/mca_ras_dash_host: file not found (ignored)

Re: [OMPI users] LSF launch with OpenMPI

2009-05-07 Thread Matthieu Brucher
of setting the necessary environment variables and > eventually calls the correct mpirun. (the option "-a openmpi" tells LSF that > we're using OpenMPI so don't try to autodetect) > > > > Regards, > > > > Jeroen Kleijer > > On Tue, May 5, 2009 at 2:23 PM,

Re: [OMPI users] LSF launch with OpenMPI

2009-05-06 Thread Matthieu Brucher
2009/5/6 Jeff Squyres <jsquy...@cisco.com>: > On May 5, 2009, at 10:01 AM, Matthieu Brucher wrote: > >> > What Terry said is correct.  It means that "mpirun" will use, under the >> > covers, the "native" launching mechanism of LSF to launch

Re: [OMPI users] LSF launch with OpenMPI

2009-05-05 Thread Matthieu Brucher
2009/5/5 Jeff Squyres <jsquy...@cisco.com>: > On May 5, 2009, at 6:10 AM, Matthieu Brucher wrote: > >> The first is what the support of LSF by OpenMPI means. When mpirun is >> executed, it is an LSF job that is actually ran? Or what does it >> imply? I've tried to

[OMPI users] LSF launch with OpenMPI

2009-05-05 Thread Matthieu Brucher
. My second question is about the LSF detection. lsf.h is detected, but when lsb_launch is searched for in libbat.so, it fails because parse_time and parse_time_ex are not found. Is there a way to add additional LSF libraries so that the search can be done? Matthieu Brucher -- Information System