Just to elaborate: as the error message implies, this check was put there
specifically to ensure that the Fortran compiler works before configure
continues any further. If the Fortran compiler is busted, configure exits with
this help message.
You can either fix your Fortran compiler, or use
On Aug 20, 2014, at 3:37 AM, Zhang,Lei(Ecom) wrote:
> I have a performance problem with receiving. In a single master thread, I
> made several Irecv calls:
>
> Irecv(buf1, ..., tag, ANY_SOURCE, COMM_WORLD)
> Irecv(buf2, ..., tag, ANY_SOURCE, COMM_WORLD)
> ...
>
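For reference, a minimal C sketch of that pre-posted MPI_ANY_SOURCE receive
pattern (the counts, datatype, and tag value here are just placeholders, and it
assumes other ranks will eventually send matching messages):

#include <mpi.h>

int main(int argc, char **argv) {
    int buf1, buf2, tag = 42;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    /* post both receives up front; either sender can match either request */
    MPI_Irecv(&buf1, 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&buf2, 1, MPI_INT, MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &reqs[1]);
    /* ... do other work; the matching MPI_Send calls happen on other ranks ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    MPI_Finalize();
    return 0;
}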
Have you tried moving your shared memory backing file directory, like the
warning message suggests?
I haven't seen a shared memory file on a network share cause correctness issues
before (just performance issues), but I could see how that could be in the
realm of possibility...
Also, are you
2014-08-15 17:50, Jeff Squyres (jsquyres) wrote:
>> On Aug 15, 2014, at 5:39 PM, Maxime Boissonneault
>> <maxime.boissonnea...@calculquebec.ca> wrote:
>>
>>> Correct.
>>>
>>> Can it be because torque (pbs_mom) is not running on the head no
On Aug 15, 2014, at 5:39 PM, Maxime Boissonneault
wrote:
> Correct.
>
> Can it be because torque (pbs_mom) is not running on the head node and
> mpiexec attempts to contact it ?
Not for Open MPI's mpiexec, no.
Open MPI's mpiexec (mpirun -- they're the
On Aug 14, 2014, at 5:52 AM, Christoph Niethammer wrote:
> I just gave gcc 4.9.0 a try and the mpi_f09 module
Wow -- that must be 1 better than the mpi_f08 module!
:-p
> is there but it seems to miss some functions:
>
> mpifort test.f90
> /tmp/ccHCEbXC.o: In function
Can you try the latest 1.8.2 rc tarball? (just released yesterday)
http://www.open-mpi.org/software/ompi/v1.8/
On Aug 14, 2014, at 8:39 AM, Maxime Boissonneault
wrote:
> Hi,
> I compiled Charm++ 6.6.0rc3 using
> ./build charm++ mpi-linux-x86_64 smp
I don't know much about OpenMP, but do you need to disable Open MPI's default
bind-to-core functionality (I'm assuming you're using Open MPI 1.8.x)?
You can try "mpirun --bind-to none ...", which will have Open MPI not bind MPI
processes to cores, which might allow OpenMP to think that it can
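For example, a hybrid MPI+OpenMP launch along those lines might look like this
(the executable name and thread count are just illustrations):

# let each MPI process spawn its own OpenMP threads, without core binding
export OMP_NUM_THREADS=8
mpirun --bind-to none -np 2 ./my_hybrid_app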
Marcus --
The fix was applied yesterday to the v1.8 branch. Would you mind testing the
v1.8 nightly tarball from last night, just to make sure it works for you?
http://www.open-mpi.org/nightly/v1.8/
On Aug 12, 2014, at 2:54 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
1.8.2 (the freeze has already occurred).
On Aug 12, 2014, at 12:32 PM, Daniels, Marcus G <mdani...@lanl.gov> wrote:
> Hi Jeff,
>
> On Tue, 2014-08-12 at 16:18 +0000, Jeff Squyres (jsquyres) wrote:
>> Can you send the output from configure, the config.log file, and
I filed the following ticket:
https://svn.open-mpi.org/trac/ompi/ticket/4857
On Aug 12, 2014, at 12:39 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
wrote:
> (please keep the users list CC'ed)
>
> We talked about this on the weekly engineering call today. Ralph has
<tismagi...@mail.ru> wrote:
> I don't have this error in OMPI 1.9a1r32252 and OMPI 1.8.1 (with --mca
> oob_tcp_if_include ib0), but in all latest night snapshots i got this error.
>
>
> Tue, 12 Aug 2014 13:08:12 +0000 from "Jeff Squyres (jsquyres)"
> <jsquy...@c
Can you send the output from configure, the config.log file, and the
ompi_config.h file?
On Aug 12, 2014, at 12:11 PM, Daniels, Marcus G <mdani...@lanl.gov> wrote:
> On Tue, 2014-08-12 at 15:50 +0000, Jeff Squyres (jsquyres) wrote:
>> It should be in the 1.8.2rc tarball (i.e.,
It should be in the 1.8.2rc tarball (i.e., to be included in the
soon-to-be-released 1.8.2).
Want to give it a whirl before release to let us know if it works for you?
http://www.open-mpi.org/software/ompi/v1.8/
On Aug 12, 2014, at 11:44 AM, Daniels, Marcus G wrote:
I filed https://svn.open-mpi.org/trac/ompi/ticket/4856 to apply these ROMIO
patches.
Probably won't happen until 1.8.3.
On Aug 6, 2014, at 2:54 PM, Rob Latham wrote:
>
>
> On 08/06/2014 11:50 AM, Mohamad Chaarawi wrote:
>
>> To replicate, run the program with 2 or more
The quick and dirty answer is that in the v1.8 series, Open MPI started binding
MPI processes to cores by default.
When you run 2 independent jobs on the same machine in the way in which you
described, the two jobs won't have knowledge of each other, and therefore they
will both start
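If it helps, you can see exactly where each job's processes land with
--report-bindings, or turn binding off entirely so the OS can schedule around
the other job (the executable names below are just placeholders):

mpirun --report-bindings -np 4 ./job_a
mpirun --bind-to none -np 4 ./job_b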
Are you running any kind of firewall on the node where mpirun is invoked? Open
MPI needs to be able to use arbitrary TCP ports between the servers on which it
runs.
This second mail seems to imply a bug in OMPI's oob_tcp_if_include param
handling, however -- it's supposed to be able to handle
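For example, while debugging this kind of thing it can help to pin both the
out-of-band and MPI TCP traffic to one known-good interface (the interface name
here is just an illustration):

mpirun --mca oob_tcp_if_include ib0 --mca btl_tcp_if_include ib0 -np 2 ./a.out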
Lenny --
Since this is about the development trunk, how about sending these kinds of
mails to the de...@open-mpi.org list? Not all the OMPI developers are on the
user's list, and this very definitely is not a user-level question.
On Aug 12, 2014, at 6:12 AM, Lenny Verkhovsky
ams.
> If you meant to cross compile, use `--host'.
> See `config.log' for more details
>
> Your help is really appreciated!
>
>
> David
>
> On Aug 11, 2014, at 7:32 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
Then it sounds like OS X wiped out your Open MPI install. It's probably safe
to re-install.
On Aug 11, 2014, at 11:09 AM, Yang, David wrote:
> Doug,
>
> I tried it and didn’t find anything. Thanks for the suggestion, though.
>
>
> David
>
The problem appears to be occurring in the hwloc component in OMPI. Can you
download hwloc 1.7.2 (standalone) and try to build that on the target machine
and see what happens?
http://www.open-mpi.org/software/hwloc/v1.7/
On Aug 10, 2014, at 11:16 AM, Jorge D'Elia
This usually indicates an error with the compiler on your machine.
As Ralph implied, this may indicate that you don't have Xcode installed (and
therefore don't have a compiler).
You can look in config.log to be sure, or send it here (compress first,
please), and we'll let you know.
On Aug
On Aug 8, 2014, at 1:24 AM, Lane, William wrote:
> Using the "--mca btl tcp,self" switch to mpirun solved all the issues (in
> addition to
> the requirement to include the --mca btl_tcp_if_include eth0 switch). I
> believe
> the "--mca btl tcp,self" switch limits
Can you try upgrading? 1.6.x is super old. 1.8.1 is the current stable
release.
On Aug 7, 2014, at 11:16 AM, Jane Lewis wrote:
> Hi all,
>
> This is a really simple problem (I hope) where I’ve introduced MPI to a
> complex numerical model which I have to kill
On Aug 5, 2014, at 1:13 PM, Dan Shell wrote:
> Need to use mpif.h for my application
> I will get the newest versions of gcc and gfortran
You should be fine without the newest versions of gcc/gfortran. The older
gcc/gfortran will support mpif.h and a limited form of
On Aug 5, 2014, at 10:57 AM, Dan Shell wrote:
> Should I look for a newer version of gfortran?
> I saw from the config fortran compile section that mpi_f08 was not compiled
It depends on what your MPI application needs. MPI (i.e., the spec, not Open
MPI) defines
Dan --
The version of gfortran that you have does not support the mpi_f08 bindings.
Note that:
- newer versions of gfortran will support the mpi_f08 bindings
- all versions of gfortran support the mpi module and the mpif.h Fortran
bindings
See the README:
--
The following notes apply
On Jul 27, 2014, at 3:39 PM, Dan Shell wrote:
> I have been looking at the openmpi doc page and would like some pointers on
> how to implement the wrapper.txt file with mpifort.
I'm not sure what you're asking here...?
> I have the wrapper .txt file how does mpifort
That's quite odd that it only happens for Java programs -- it should happen for
*all* programs, based on the stack trace you've shown.
Can you print the value of the lds struct where the error occurs?
On Jul 25, 2014, at 2:29 AM, Siegmar Gross
wrote:
>
Hyperthreading is pretty great for non-HPC applications, which is why Intel
makes it. But hyperthreading *generally* does not help HPC application
performance. You're basically halving several on-chip resources / queues /
pipelines, and that can hurt performance-hungry HPC applications.
Can you try upgrading to OMPI 1.6.5? 1.6.5 has *many* bug fixes compared to
1.5.4.
A little background...
Open MPI is developed in terms of release version pairs:
"1.odd" are feature releases. We add new (and remove old) features, etc. We
do a lot of testing, but this is all done in
You might well be able to:
mpirun --mca btl ^openib,udapl ...
Which excludes both openib and udapl (both of which used the same librdmacm).
If this doesn't solve the problem, then please send the info Ralph asked for,
and we'll dig deeper...
On Jun 27, 2014, at 3:41 PM, Ralph Castain
Just curious -- if you run standard ping-pong kinds of MPI benchmarks with the
same kind of mpirun command line that you run your application, do you see the
expected level of performance? (i.e., verification that you're using the low
latency transport, etc.)
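If you don't have one handy, a bare-bones ping-pong sketch in C is below (real
benchmarks such as NetPIPE or the OSU micro-benchmarks are better; the
iteration count here is arbitrary). Run it with 2 processes; half the reported
round trip is roughly your latency.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, i;
    char buf = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double t0 = MPI_Wtime();
    for (i = 0; i < 1000; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0)
        printf("avg round trip: %g us\n", (MPI_Wtime() - t0) / 1000.0 * 1e6);
    MPI_Finalize();
    return 0;
}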
On Jun 25, 2014, at 9:52 AM,
Sounds like you have a problem with the physical layer of your InfiniBand. You
should run layer 0 diagnostics and/or contact your IB vendor for assistance.
On Jun 24, 2014, at 4:48 AM, Diego Saúl Carrió Carrió
wrote:
> Dear all,
>
> I have problems for a long time
This doesn't sound like a linking problem; this sounds like there's an error in
your application that is causing it to abort before completing.
On Jun 25, 2014, at 12:19 PM, Sergii Veremieiev wrote:
> Dear Sir/Madam,
>
> I'm trying to run a parallel finite element
Brock --
Can you run with "ompi_info --all"?
With "--param all all", ompi_info in v1.8.x is defaulting to only showing level
1 MCA params. It's showing you all possible components and variables, but only
level 1.
Or you could also use "--level 9" to show all 9 levels. Here's the relevant
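In other words, something like:

# show everything, all the way down to level 9
ompi_info --all
# or keep your old invocation and just raise the level
ompi_info --param all all --level 9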
I'll let Nathan/others comment on the correctness of your program.
What version of Open MPI are you using? Be sure to use the latest to get the
most correct one-sided implementation.
Also, as one of the prior LAM/MPI developers, I must plead with you to stop
using LAM/MPI. We abandoned it
Open MPI is distributed under the modified BSD license. Here’s a link to the
v1.8 LICENSE file:
https://svn.open-mpi.org/trac/ompi/browser/branches/v1.8/LICENSE
As long as you abide by the terms of that license, you are fine.
On Jun 17, 2014, at 4:41 AM, Victor Vysotskiy
>>> w: www.jaison.me
>>> On 11/06/2014, at 11:19 am, Ralph Castain <r...@open-mpi.org> wrote:
>>>
>>>> I had a chance to think about this some more,
>> >restarting pbs_mom." ) doesn't work, try using the RDMACM CPC (instead
>> > of
>> >UDCM, which is a pretty recent addition to the openIB BTL.) by setting:
>> >
>> >-mca btl_openib_cpc_include rdmacm
>> >
>> >Josh
>
Fischer, Greg A. wrote:
>>> Jeff/Nathan,
>>>
>>> I ran the following with my debug build of OpenMPI 1.8.1 - after opening a
>>> terminal on a compute node with "qsub -l nodes 2 -I":
>>>
>>> mpirun -mca btl openib,self -mca btl_base_v
On Jun 10, 2014, at 7:19 PM, Ralph Castain wrote:
> I had a chance to think about this some more, and I'm wondering if this falls
> into some new category of "member". Personally, I would really welcome
> getting the MTT results from another site as it would expand coverage
Greg:
Can you run with "--mca btl_base_verbose 100" on your debug build so that we
can get some additional output to see why UDCM is failing to setup properly?
On Jun 10, 2014, at 10:25 AM, Nathan Hjelm <hje...@lanl.gov> wrote:
> On Tue, Jun 10, 2014 at 12:10:28AM
Oops. Looks like we missed these in the Fortran interfaces.
I'll file a bug; we'll get this fixed in OMPI 1.8.2. Many thanks for reporting
this.
On Jun 5, 2014, at 5:41 AM, michael.rach...@dlr.de wrote:
> Dear developers of OpenMPI,
>
> I found that when building an executable from a
I'm digging out from mail backlog from being at the MPI Forum last week...
Yes, from looking at the stack traces, it's segv'ing inside the memory
allocator, which typically means some other memory error occurred before this.
I.e., this particular segv is a symptom of the problem, not the
I seem to recall that you have an IB-based cluster, right?
From a *very quick* glance at the code, it looks like this might be a simple
incorrect-finalization issue. That is:
- you run the job on a single server
- openib disqualifies itself because you're running on a single server
- openib
On Jun 9, 2014, at 7:00 PM, Vineet Rawat wrote:
> We actually do ship the /share and /etc directories. We set
> OPAL_PREFIX to a sub-directory of our installation and make sure those things
> are in our PATH/LD_LIBRARY_PATH.
>
> I can try adding the additional shared
On Jun 9, 2014, at 6:36 PM, Vineet Rawat wrote:
> No, we only included what seemed necessary (from ldd output and experience on
> other clusters). The only things in my /lib/openmpi are
> libompi_dbg_msgq*. Is that what you're referring to? In /lib for
> 12.8.1
On Jun 9, 2014, at 5:41 PM, Vineet Rawat wrote:
> We've deployed OpenMPI on a small cluster but get a SEGV in orted. Debug
> information is very limited as the cluster is at a remote customer site. They
> have a network card with which I'm not familiar (Cisco Systems
George and I are together at the MPI Forum this week -- we just looked at this
in more detail; it looks like this is a more pervasive problem.
Let us look at this a bit more...
On Jun 5, 2014, at 10:37 AM, George Bosilca wrote:
> Alan,
>
> I think we forgot to cleanup
Ok. I think most were fixed after you reported them last year, but a few new
MPI-3 functions were added after that, and they accidentally had "ierr" instead
of "ierror".
On Jun 3, 2014, at 11:47 AM, W Spector wrote:
> Jeff Squyres wrote:
> > Did you find any other places
>>>> Last, the use-mpi-ignore-tkr directory:
>>>>
>>>> cd ../use-mpi-ignore-tkr
>>>> ls -1 mpi*.in | xargs -i -t ex -c ":1,\$s?ierr?ierror?" -c ":wq" {}
>>>>
>>>> As you can tell from the below, I neede
These messages are normal for RPM.
Keep in mind that you're installing the source RPM -- not a binary RPM. Most
people use the source RPM in an rpmbuild command (to build a binary RPM for
their environment), not installing directly.
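For example (the version number is just illustrative), the typical flow is:

rpmbuild --rebuild openmpi-1.8.1-1.src.rpm
# the binary RPM ends up under your rpmbuild RPMS/<arch>/ directory; install that one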
On May 30, 2014, at 7:43 AM, Fernando Cruz
I think Ralph and Gilles covered most everything, just let me emphasize a few
things:
- MTT is simply an engine for running tests and gathering results. Nothing
more.
- MTT actually only includes a trivial test suite for MPI ("hello world" and
"send a message around in a ring").
- In the Open
Your configure statement looks fine (note that you don't need the F77=ifort
token, but it's harmless -- the FC=ifort token is the important one).
Can you send all the information listed here:
http://www.open-mpi.org/community/help/
On May 28, 2014, at 2:15 AM, Lorenzo Donà
I am sorry for the delay in replying; this week got a bit crazy on me.
I'm guessing that Open MPI is striping across both your eth0 and ib0 interfaces.
You can limit which interfaces it uses with the btl_tcp_if_include MCA param.
For example:
# Just use eth0
mpirun --mca btl_tcp_if_include eth0 ...
Sorry to jump in late on this thread, but here's my thoughts:
1. Your initial email said "threads", not "processes". I assume you actually
meant "processes" (having multiple threads calls MPI_FINALIZE is erroneous).
2. Periodically over the years, we have gotten the infrequent request to
Would a better solution be something like:
char default_credential[8] = "12345";
char *bar = strdup(default_credential);
?
On May 22, 2014, at 12:52 AM, George Bosilca wrote:
> This is more subtle than described here. It's a vectorization problem
> and frankly it should
Can you send the output of ifconfig on both compute-0-15.local and
compute-0-16.local?
On May 22, 2014, at 3:30 AM, Bibrak Qamar wrote:
> Hi,
>
> I am facing problem in running Open MPI using TCP (on 1G Ethernet). In
> practice the bandwidth must not exceed 1000 Mbps but
We handled this in IM. Nvidia is all setup.
On May 21, 2014, at 3:11 PM, Rolf vandeVaart wrote:
> Can I get a username/password for submitting mtt results?
>
> username=nvidia
> ---
> This
Ditto -- Lmod looks pretty cool. Thanks for the heads up.
On May 16, 2014, at 6:23 PM, Douglas L Reeder wrote:
> Maxime,
>
> I was unaware of Lmod. Thanks for bringing it to my attention.
>
> Doug
> On May 16, 2014, at 4:07 PM, Maxime Boissonneault
>
On May 15, 2014, at 8:00 PM, Fabricio Cannini wrote:
>> Nobody is disagreeing that one could find a way to make CMake work - all we
>> are saying is that (a) CMake has issues too, just like autotools, and (b) we
>> have yet to see a compelling reason to undertake the
On May 15, 2014, at 6:14 PM, Fabricio Cannini wrote:
> Alright, but now I'm curious as to why you decided against it.
> Could please elaborate on it a bit ?
OMPI has a long, deep history with the GNU Autotools. It's a very long,
complicated story, but the high points are:
This bug should be fixed in tonight's tarball, BTW.
On May 15, 2014, at 9:19 AM, Ralph Castain wrote:
> It is an unrelated bug introduced by a different commit - causing mpirun to
> segfault upon termination. The fact that you got the hostname to run
> indicates that this
These are all good points -- thanks for the feedback.
Just to be clear: my point about the menu system was to generate a file that
could be used for subsequent installs, very specifically targeted at those who
want/need scriptable installations.
One possible scenario could be: you download
>>>> use. So we wind up building a bunch of useless modules.
>>>>
>>>>
>>>> On May 14, 2014, at 3:09 PM, Ralph Castain <r...@open-mpi.org> wrote:
>>>>
>>>>> FWIW: I believe we no longer build the slurm support by default
On May 14, 2014, at 6:09 PM, Ralph Castain wrote:
> FWIW: I believe we no longer build the slurm support by default, though I'd
> have to check to be sure. The intent is definitely not to do so.
The srun-based support builds by default. I like it that way. :-)
PMI-based
Here's a bit of our rationale, from the README file:
Note that for many of Open MPI's --with-<foo> options, Open MPI will,
by default, search for header files and/or libraries for <foo>. If
the relevant files are found, Open MPI will build support for <foo>;
if they are not found, Open MPI will
On May 9, 2014, at 8:34 PM, Spenser Gilliland wrote:
> Thanks for the quick response. I'm having a lot of fun learning MPI and this
> mailing list has been invaluable.
>
> So, If I do a scatter on an inter communicator will this use all left
> process to scatter
On May 9, 2014, at 7:56 PM, Spenser Gilliland wrote:
> I'm having some trouble understanding Intercommunicators with
> Collective Communication. Is there a collective routine to express a
> transfer from all left process to all right processes? or vice versa?
The
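For what it's worth, here's a self-contained sketch of a "left scatters to
right" intercommunicator collective (it assumes at least 2 ranks and splits
MPI_COMM_WORLD in half itself; in the root group, the root passes MPI_ROOT, its
peers pass MPI_PROC_NULL, and the right group passes the root's rank in the
left group):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int wrank, wsize;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    int left = (wrank < wsize / 2);          /* first half = "left" group */
    MPI_Comm local;
    MPI_Comm_split(MPI_COMM_WORLD, left, wrank, &local);

    /* the remote leader is rank 0 of the other half, in MPI_COMM_WORLD terms */
    int remote_leader = left ? wsize / 2 : 0;
    MPI_Comm inter;
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, remote_leader, 99, &inter);

    int recvd = -1;
    if (left) {
        int lrank, rsize;
        MPI_Comm_rank(local, &lrank);
        MPI_Comm_remote_size(inter, &rsize);
        int *sendbuf = malloc(rsize * sizeof(int));
        for (int i = 0; i < rsize; i++) sendbuf[i] = i;
        /* only the root passes MPI_ROOT; its peers pass MPI_PROC_NULL */
        MPI_Scatter(sendbuf, 1, MPI_INT, NULL, 0, MPI_INT,
                    lrank == 0 ? MPI_ROOT : MPI_PROC_NULL, inter);
        free(sendbuf);
    } else {
        /* right group receives from rank 0 of the left group */
        MPI_Scatter(NULL, 0, MPI_INT, &recvd, 1, MPI_INT, 0, inter);
        printf("world rank %d got %d\n", wrank, recvd);
    }
    MPI_Finalize();
    return 0;
}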
On May 7, 2014, at 4:10 PM, Richard Shaw wrote:
> Thanks Rob. I'll keep track of it over there. How often do updated versions
> of ROMIO get pulled over from MPICH into OpenMPI?
"Periodically".
Hopefully, the fix will be small and we can just pull that one fix down to
Are you using TCP as the MPI transport?
If so, another thing to try is to limit the IP interfaces that MPI uses for its
traffic to see if there's some kind of problem with specific networks.
For example:
mpirun --mca btl_tcp_if_include eth0 ...
If that works, then try adding in any/all
On May 6, 2014, at 9:40 AM, Imran Ali wrote:
> My install was in my user directory (i.e $HOME). I managed to locate the
> source directory and successfully run make uninstall.
FWIW, I usually install Open MPI into its own subdir. E.g.,
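Something along these lines (the path and version are purely illustrative):

./configure --prefix=$HOME/openmpi-1.8.1 ...
make all install

Then getting rid of it later is just a matter of removing that one directory.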
On May 6, 2014, at 9:32 AM, Imran Ali wrote:
> I will attempt that than. I read at
>
> http://www.open-mpi.org/faq/?category=building#install-overwrite
>
> that I should completely uninstall my previous version.
Yes, that is best. OR: you can install into a
The thread support in the 1.6 series is not very good. You might try:
- Upgrading to 1.6.5
- Or better yet, upgrading to 1.8.1
On May 6, 2014, at 7:24 AM, Imran Ali wrote:
> I get the following error when I try to run the following python code
>
> import
Gah. I thought we had this Fortran stuff finally correct. :-\
Let me take this off-list and see if there's something still not quite right,
or if the Sun fortran compiler isn't doing something right.
On Apr 30, 2014, at 10:40 AM, Siegmar Gross
wrote:
On Apr 29, 2014, at 4:28 PM, Vince Grimes wrote:
> I realize it is no longer in the history of replies for this message, but the
> reason I am trying to use tcp instead of Infiniband is because:
>
> We are using an in-house program called ScalIT that performs operations on
Brian: Can you report this bug to PGI and see what they say?
On Apr 27, 2014, at 2:15 PM, "Hjelm, Nathan T" wrote:
> I see nothing invalid about that line. It is setting a struct scif_portID
> from another struct scif_portID which is allowed in C99. The error might be
>
In principle, there's nothing wrong with using ib0 interfaces for TCP MPI
communication, but it does raise the question of why you're using TCP when you
have InfiniBand available...?
Aside from that, can you send all the info listed here:
http://www.open-mpi.org/community/help/
On Apr
On Apr 23, 2014, at 4:45 PM, Ross Boylan wrote:
>> is OK. So, if any nonblocking calls are used, one must use mpi.test or
>> mpi.wait to check if they are complete before trying any blocking calls.
That is also correct -- it's MPI semantics (communications initiated by
On Mar 13, 2014, at 3:15 PM, Ross Boylan wrote:
> The motivation was
> http://www.stats.uwo.ca/faculty/yu/Rmpi/changelogs.htm notes
> --
> 2007-10-24, version 0.5-5:
>
> dlopen has been used to load libmpi.so explicitly. This is mainly
A few suggestions:
- Try using Open MPI 1.8.1. It's the newest release, and has many improvements
since the 1.6.x series.
- Try using "--mca btl openib,sm,self" (in both v1.6.x and v1.8.x). This
allows Open MPI to use shared memory to communicate between processes on the
same server, which
See this FAQ entry:
http://www.open-mpi.org/faq/?category=openfabrics#ib-low-reg-mem
On Apr 22, 2014, at 2:38 PM, Amin Hassani wrote:
> When I want to use OpenIB module of OpenMPI (thorugh -mca btl
> sm,self,openib), I keep getting the message that configuration only
Sounds like you're freeing memory that does not belong to you. Or you have
some kind of memory corruption somehow.
On Apr 17, 2014, at 2:01 PM, Oscar Mojica wrote:
> Hello guys
>
> I used the command
>
> ulimit -s unlimited
>
> and got
>
> stack size
Sure; I'll ping you off-list.
On Apr 15, 2014, at 2:03 AM, Gilles Gouaillardet
wrote:
> Dear MTT Folks,
>
> i would like to access to the ompi-test repository.
> my organization (RIST) already signed the Corporate version of the
> OpenMPI Contribution License
On Apr 15, 2014, at 8:35 AM, Marco Atzeri wrote:
> on 64bit 1.7.5,
> as Symantec Endpoint protections, just decided
> that a portion of 32bit MPI is a Trojan...
It's the infamous MPI trojan. We take over your computer and use it to help
cure cancer.
:p
--
Jeff
Hope this helps,
> Gus Correa
>
>
> On 04/14/2014 03:11 PM, Djordje Romanic wrote:
> to get help :)
>
>
>
> On Mon, Apr 14, 2014 at 3:11 PM, Djordje Romanic <djord...@gmail.com
> <mailto:djord...@gmail.com>> wrote:
>
> Yes, but I was hoping
Yes, this is a bug. Doh!
Looks like we fixed it for one case, but missed another case. :-(
I've filed https://svn.open-mpi.org/trac/ompi/ticket/4519, and will fix this
shortly.
On Apr 14, 2014, at 4:11 AM, Luis Kornblueh
wrote:
> Dear all,
>
> the
If you didn't use Open MPI, then this is the wrong mailing list for you. :-)
(this is the Open MPI users' support mailing list)
On Apr 14, 2014, at 2:58 PM, Djordje Romanic <djord...@gmail.com> wrote:
> I didn't use OpenMPI.
>
>
> On Mon, Apr 14, 2014 at 2:37 PM, Jeff
This can also happen when you compile your application with one MPI
implementation (e.g., Open MPI), but then mistakenly use the "mpirun" (or
"mpiexec") from a different MPI implementation (e.g., MPICH).
On Apr 14, 2014, at 2:32 PM, Djordje Romanic wrote:
> I compiled it
Sorry for the delay in replying.
Can you try upgrading to Open MPI 1.8, which was released last week? We
refreshed the version of ROMIO that is included in OMPI 1.8 vs. 1.6.
On Apr 8, 2014, at 6:49 PM, Daniel Milroy wrote:
> Hello,
>
> Recently a couple of our
On Apr 9, 2014, at 8:47 PM, Filippo Spiga wrote:
> I haven't solve this yet but I managed to move to code to be compatible woth
> PGI 14.3. Open MPI 1.8 compiles perfectly with the latest PGI.
>
> In parallel I will push this issue to the PGI forum.
FWIW: We've seen
amples.
>
> Thank you,
> Saliya
>
>
> On Tue, Apr 8, 2014 at 9:06 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> If your examples are anything more than trivial code, we'll probably need a
> signed contribution agreement. This is a bit of a hass
In general, benchmarking is very hard.
For example, you almost certainly want to do some "warmup" communications of
the pattern that you're going to measure. This gets all communications setup,
resources allocated, caches warmed up, etc.
That is, there's generally some one-time setup that
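In code, the warm-up usually amounts to an untimed loop before the measured
one; here's a small self-contained sketch (the iteration counts are arbitrary,
and it just times MPI_Bcast as a stand-in for whatever pattern you care about):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, buf = 0, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 100; i++)          /* untimed warm-up iterations */
        MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);       /* line everyone up before timing */
    double t0 = MPI_Wtime();
    for (i = 0; i < 1000; i++)
        MPI_Bcast(&buf, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("avg bcast: %g us\n", (MPI_Wtime() - t0) / 1000.0 * 1e6);
    MPI_Finalize();
    return 0;
}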
8, 2014 at 3:49 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> Open MPI 1.4.3 is *ancient*. Please upgrade -- we just released Open MPI 1.8
> last week.
>
> Also, please look at this FAQ entry -- it steps you through a lot of basic
> troubleshooting step
On Apr 7, 2014, at 6:47 PM, "Blosch, Edwin L" wrote:
> Sorry for the confusion. I am not building OpenMPI from the SVN source. I
> downloaded the 1.8 tarball and did configure, and that is what failed. I was
> surprised that it didn't work on a vanilla Redhat
On Mar 30, 2014, at 2:43 PM, W Spector wrote:
> The mpi.mod file that is created from both the openmpi-1.7.4 and
> openmpi-1.8rc1 tarballs does not seem to be generating interface blocks for
> the Fortran API - whether the calls use choice buffers or not.
Can you be a bit
ed but some one else put
> a ticket on 1024 too.
> for this purpose i wasn't able to communicate with other computers.
>
>
>
>
>
> On Mon, Apr 7, 2014 at 9:52 PM, Jeff Squyres (jsquyres) <jsquy...@cisco.com>
> wrote:
> I was out on vacation / fully disco
g mpi to use another.
>
> mpiexec -n 2 --host karp,wirth --mca btl ^openib --mca btl_tcp_if_include br0
> --mca btl_tcp_port_min_v4 1 ./a.out
>
> Thanks again for the nice and effective suggestions.
>
> Regards.
>
>
>
> On Tue, Mar 25, 2014 at 1:27 PM, J
On Mar 28, 2014, at 12:10 PM, Rob Latham wrote:
> I also found a bad memcopy (i was taking the size of a pointer to a thing
> instead of the size of the thing itself), but that only matters if ROMIO uses
> extended generalized requests. I trust ticket #1159 is still