Hi all -
I'm trying to use jemalloc with my project, but I get a crash in
opal_memory_linux_ptmalloc2_open when jemalloc is linked. If I use
tcmalloc, this does not happen.
Any ideas? Is there a sanctioned way to override malloc libraries in
conjunction with openmpi?
Thanks,
Brian
Is this still the case in the 1.6 and 1.7 series?
Thanks,
Brian
On Mon, Feb 4, 2013 at 9:09 PM, Roland Schulz <rol...@utk.edu> wrote:
>
>
>
> On Mon, Jan 28, 2013 at 9:20 PM, Brian Budge <brian.bu...@gmail.com> wrote:
>>
>> I believe that yes, you have to compile with --enable-mpi-thread-multiple
>> to get anything other than SINGLE.
You appear to be pairing new with delete[]. Instead, pair new[] with
delete[], and new with delete.
Brian
On Wed, Jun 12, 2013 at 4:44 PM, Corey Allen
wrote:
> I have done a search on this and I haven't found an explanation. I am not a
Hi all -
I have an application where the master node will spawn slaves to
perform computation (using the singleton Comm_spawn_multiple paradigm
available in OpenMPI). The master will only decide the work to do
and provide data common to all the computations.
The slaves are
I believe that yes, you have to compile with --enable-mpi-thread-multiple
to get anything other than SINGLE.
Brian
On Tue, Jan 22, 2013 at 12:56 PM, Roland Schulz wrote:
> Hi,
>
> compiling 1.6.1 or 1.6.2 without --enable-mpi-thread-multiple returns from
> MPI_Init_thread as provided
Did you build openmpi with multithreading enabled?
If not then that could be the problem.
Brian
On Dec 5, 2012 3:20 AM, "赵印" wrote:
> Hi all,
>
> I have a MPI_Isend/MPI_Recv problem in a multi-thread program.
>
> In the program:
> *The first machine* has one thread
>
> mpirun -mca orte_fork_agent "valgrind " ./my_app
>
> We will execute "valgrind ./my_app" whenever we start one of
> your processes. This includes process launched via comm_spawn.
>
> HTH
> Ralph
>
> On Nov 16, 2012, at 4:38 PM, Brian Budge <br
Thanks very much, Ralph. Silly me, I thought it might actually take some
effort :)
Brian
On Fri, Nov 16, 2012 at 4:04 PM, Ralph Castain wrote:
> Easiest solution: just add valgrind into the cmd line
>
> mpirun valgrind ./my_app
>
>
> On Nov 16, 2012, at 3:37 PM, "Tom Bryan
Hi all -
I'm using openmpi to spawn child processes in singleton mode. If I use
mpirun, I can just run
> mpirun valgrind myprog
With spawn, it is expected that the spawned process will call
MPI_Init(_thread). If I want to run valgrind on my processes, what steps
should be taken? I'm currently
On Tue, Nov 13, 2012 at 1:56 AM, 赵印 wrote:
> I have a problem here.
>
> My program runs perfectly in the MPI 1.6 series, but it runs into
> some problems in the MPI 1.4.x series. *Does the MPI 1.4.x series have a
> bug related to MPI_Recv?*
>
> The log in Node[1] says
h didn't get into the 1.6.2
> official release, and the version numbers in the repo didn't immediately get
> updated - so the nightly build was still labeled as 1.6.2 even after the
> official release came out.
>
>
> On Oct 16, 2012, at 10:46 AM, Brian Budge <brian.bu
Hi all -
There was a bug in version 1.6.1 that caused singleton spawn not to
work correctly with multi-machine configurations. I verified that a
nightly build of 1.6.2 fixed this issue, in particular 1.6.2a1r27234
works. I just grabbed the 1.6.2 official release, and it appears that
somehow the
Hi Ralph -
Is this really true? I've been using thread_multiple in my openmpi
programs for quite some time... There may be known cases where it
will not work, but for vanilla MPI use, it seems good to go. That's
not to say that you can't create your own deadlock if you're not
careful, but they
On Mon, Oct 1, 2012 at 10:33 AM, Ralph Castain wrote:
> Yes, that is the expected behavior as you describe it.
>
> If you want to run on hosts that are not already provided (via hostfile in
> the environment or on the command line), then you need to use the "add-host"
> or
On Wed, Sep 12, 2012 at 10:23 AM, Ralph Castain <r...@open-mpi.org> wrote:
>
> On Sep 12, 2012, at 9:55 AM, Brian Budge <brian.bu...@gmail.com> wrote:
>
>> On Wed, Aug 17, 2011 at 12:05 AM, Simone Pellegrini
>> <spellegr...@dps.uibk.ac.at> wrote:
>>
On Tue, Sep 18, 2012 at 2:14 PM, Alidoust wrote:
>
> Dear Madam/Sir,
>
>
> I have a serial Fortran code (f90) dealing with matrix-diagonalization
> subroutines, and recently got a parallel version of it to speed up some
> parts that are unfeasible with the serial program.
> I have
On Wed, Aug 17, 2011 at 12:05 AM, Simone Pellegrini
wrote:
> On 08/16/2011 11:15 PM, Ralph Castain wrote:
>>
>> I'm not finding a bug - the code looks clean. If I send you a patch, could
>> you apply it, rebuild, and send me the resulting debug output?
>
> yes, I could
e (singleton comm_spawn is so rarely
> used that it can easily be overlooked for some time).
>
> Thx
> Ralph
>
>
>
>
> On Aug 31, 2012, at 3:32 PM, Brian Budge <brian.bu...@gmail.com> wrote:
>
>> Thanks, much appreciated.
>>
>> On Fri, Aug 31, 2
lease is issued, if that helps.
>
>
> On Aug 31, 2012, at 2:33 PM, Brian Budge <brian.bu...@gmail.com> wrote:
>
>> Hi Ralph -
>>
>> This is true, but we may not know until well into the process whether
>> we need MPI at all. We have an SMP/NUMA mode that is designed
aemon running the job - only difference is in the number of
> characters the user types to start it.
>
>
> On Aug 30, 2012, at 8:44 AM, Brian Budge <brian.bu...@gmail.com> wrote:
>
>> In the event that I need to get this up-and-running soon (I do need
>> somethin
Hi all -
I'm writing a program which will start in a single process. This
program will call init (THREAD_MULTIPLE), and finalize. In between,
it will call spawn an unknown number of times (think of the program as
a daemon that launches jobs over and over again).
I'm running a simple example
In the event that I need to get this up-and-running soon (I do need
something working within 2 weeks), can you recommend an older version
where this is expected to work?
Thanks,
Brian
On Tue, Aug 28, 2012 at 4:58 PM, Brian Budge <brian.bu...@gmail.com> wrote:
> Thanks!
>
> On Tu
Thanks!
On Tue, Aug 28, 2012 at 4:57 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Yeah, I'm seeing the hang as well when running across multiple machines. Let
> me dig a little and get this fixed.
>
> Thanks
> Ralph
>
> On Aug 28, 2012, at 4:51 PM, Brian Budge &l
e set - does it work okay?
>
> It works fine for me, hence the question.
>
> Also, what OMPI version are you using?
>
> On Aug 28, 2012, at 4:25 PM, Brian Budge <brian.bu...@gmail.com> wrote:
>
>> I see. Okay. So, I just tried removing the check for universe siz
(not the original singleton!) reads the
> hostfile to find out how many nodes are around, and then does the launch.
>
> You are trying to check the number of nodes from within the singleton, which
> won't work - it has no way of discovering that info.
>
>
>
>
> On Aug 28,
> cat hostsfile
localhost
budgeb-sandybridge
Thanks,
Brian
On Tue, Aug 28, 2012 at 2:36 PM, Ralph Castain <r...@open-mpi.org> wrote:
> Hmmm...what is in your "hostsfile"?
>
> On Aug 28, 2012, at 2:33 PM, Brian Budge <brian.bu...@gmail.com> wrote:
>
>
< size << std::endl;
}
std::cerr << "slave responding..." << std::endl;
MPI_Finalize();
return 0;
}
Any ideas? Thanks for any help.
Brian
On Wed, Aug 22, 2012 at 9:03 AM, Ralph Castain <r...@open-mpi.org> wrote:
> It really is just
Hi. I know this is an old thread, but I'm curious if there are any
tutorials describing how to set this up? Is this still available on
newer open mpi versions?
Thanks,
Brian
On Fri, Jan 4, 2008 at 7:57 AM, Ralph Castain wrote:
> Hi Elena
>
> I'm copying this to the user list
d if there is
nothing else important for the processor to turn to, a fast MPI_Recv is what
matters.
Dick Treumann - MPI Team
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846 Fax (845) 433-8363
From: Brian Budge <bri
Hi all -
I just ran a small test to find out the overhead of an MPI_Recv call
when no communication is occurring. It seems quite high. I noticed
during my Google excursions that OpenMPI does busy waiting. I also
noticed that the -mca mpi_yield_when_idle option seems not to help
much (in
Yes, sorry. I did mean 1.5. In my case, going back to 1.4.3 solved my
OOM problem.
On Sun, Oct 17, 2010 at 4:57 PM, Ralph Castain <r...@open-mpi.org> wrote:
> There is no OMPI 2.5 - do you mean 1.5?
>
> On Oct 17, 2010, at 4:11 PM, Brian Budge wrote:
>
>> Hi Jody -
&g
Hi Jody -
I noticed this exact same thing the other day when I used OpenMPI v
2.5 built with valgrind support. I actually ran out of memory due to
this. When I went back to v 2.43, my program worked fine.
Are you also using 2.5?
Brian
On Wed, Oct 6, 2010 at 4:32 AM, jody
org> wrote:
>
> On Jul 12, 2010, at 11:12 AM, Brian Budge wrote:
>
>> HI Ralph -
>>
>> Thanks for the reply. I think this patch sounds great! The idea in
>> our software is that it won't be known until after the program is
>> running whet
ite some graphic tool
> which will call mpirun or mpiexec. But somewhere you have to tell OpenMPI
> what to run on how many processors etc.
>
> I'd suggest you take a look at the "MPI-The Complete Reference" Vol I and II
>
> Jody
>
> On Mon, Jul 12, 2010 at
an external program like mpirun. Is there a plan for this to
enter the mainline?
Brian
On Mon, Jul 12, 2010 at 8:29 AM, Ralph Castain <r...@open-mpi.org> wrote:
>
> On Jul 12, 2010, at 9:07 AM, Brian Budge wrote:
>
>> Hi Jody -
>>
>> Thanks for the reply. is there a w
rguments
> will be set to an intercommunicator of the spawner and the spawnees.
> You can use this intercommunicator as the communicator argument
> in the MPI_functions.
>
> Jody
> On Fri, Jul 9, 2010 at 5:56 PM, Brian Budge <brian.bu...@gmail.com> wrote:
>> Hi all -
>>
>> I'v
I believe that it specifies the *minimum* threading level supported. If I
recall, OMPI is already FUNNELED-safe even in SINGLE mode. However, if MPI
calls are made from outside the main thread, you should specify FUNNELED for
portability.
Brian
On Mar 2, 2010 11:59 PM, "Terry Frankcombe"
Is your code multithreaded?
On Feb 25, 2010 12:56 AM, "Amr Hassan" wrote:
Thanks a lot for your reply.
I'm using blocking Send and Receive. All the clients are sending data, and
the server receives the messages from the clients with MPI_ANY_SOURCE as
the sender. Do you
We've seen similar things in our code. In our case it is probably due to a
race condition. Try running the segv'ing process in a debugger, and it will
likely show you a bug in your code.
On Feb 24, 2010 9:36 PM, "Amr Hassan" wrote:
Hi All,
I'm facing a strange problem
Hi all -
We are receiving an error of MPI_ERR_TRUNCATE from MPI_Test (after
enabling the RETURN error handler). I'm confused as to what might
cause this, as I was assuming that this generally resulted from a recv
call being made requesting fewer bytes than were sent.
Can anyone shed some light
One small (or to some, not so small) note is that full multi-threading with
OpenMPI is very unlikely to work with InfiniBand right now.
Brian
On Mon, Mar 10, 2008 at 6:24 AM, Michael wrote:
> Quick answer, till you get a complete answer, Yes, OpenMPI has long
> supported most
:
>
> > Brian,
> >Here is how I do it:
> >
> > ./configure --prefix /opt/openmpi-1.2.4 --with-openib=/usr/local/
> > ofed \
> > --without-tm CC=icc CXX=icpc F77=ifort FC=ifort \
> > --with-threads=posix --enable-mpi-threads
> >
> >
> >
Hi all -
I have been using OpenMPI for quite a while now, and it's working out great.
I was looking at the FAQ and trying to figure out how to configure OpenMPI
with InfiniBand. It shows how to enable IB by pointing to the OFED directory.
I have InfiniBand built into the kernel, along with IP over
Henry -
OpenMP and OpenMPI are two different things. OpenMP is a way to
automatically (in limited situations) parallelize your code using a
threading model. OpenMPI is an MPI implementation. MPI is a message
passing standard, which usually parallelizes computation on a process
basis.
Brian
Thanks! That appears to have done it.
Brian
On 1/17/07, Scott Atchley <atch...@myri.com> wrote:
On Jan 17, 2007, at 10:45 AM, Brian Budge wrote:
> Hi Adrian -
>
> Thanks for the reply. I have been investigating this further. It
> appears that ssh isn't start
ue, Jan 16, 2007 at 05:22:35PM -0800, Brian Budge wrote:
> Hi all -
Hi!
> If I run from host-0:
> > mpirun -np 4 -host host-0 myprogram
>
> I have no problems, but if I run
> >mpirun -np 4 -host host-1 myprogram
> error while loading shared libraries: libSGUL.so: can
Hi all -
I'm having a bit of an issue with my library paths and mpi that I can't
quite seem to resolve.
If I run from host-0:
mpirun -np 4 -host host-0 myprogram
I have no problems, but if I run
mpirun -np 4 -host host-1 myprogram
I get an error like this:
error while loading shared
OMPI's PML for some other project? Or are you writing MPI
applications? Or ...?
On Nov 2, 2006, at 2:22 PM, Brian Budge wrote:
> Thanks for the pointer, it was a very interesting read.
>
> It seems that by default OpenMPI uses the nifty pipelining trick
> with pinning pages while tra
order to
register a different memory segment for another memory transfer.
Brian
On Nov 2, 2006, at 12:22 PM, Brian Budge wrote:
> Thanks for the pointer, it was a very interesting read.
>
> It seems that by default OpenMPI uses the nifty pipelining trick
> with pinning pages while tra
searched around, but it's
possible I just used the wrong search terms.
Thanks,
Brian
On 11/2/06, Jeff Squyres <jsquy...@cisco.com> wrote:
This paper explains it pretty well:
http://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols/
On Nov 2, 2006, at 1:37 PM, Brian Budge wrote:
Hi all -
I'm wondering how DMA is handled in OpenMPI when using the InfiniBand
protocol. In particular, will I get a speed gain if my read/write buffers
are already pinned via mlock?
Thanks,
Brian