Hi,
I was able to build Open MPI 1.10.2 with the same configure command line
(after I quoted the LDFLAGS parameters).
Can you please run
grep SIZEOF_PTRDIFF_T config.status
It should be 4 or 8, but it seems different in your environment (!)
Are you running a 32- or 64-bit kernel? On which p
Devon,
send() is a libc function that is used internally by Open MPI, and the
linker resolves those internal calls to your user function instead of the
libc one.
Simply rename your function to mysend() or something else that is not used by
libc, and your issue will likely be fixed.
Cheers,
Gilles
On Tuesday, May 10, 2016, Devon Hollow
Hello,
I am having trouble understanding why I am getting an error when running
the program produced by the attached C file. In this file, there are three
short functions: send(), bounce() and main(). send() and bounce() both use
MPI_Send() and MPI_Recv(), but critically, neither one is called fro
Hi Gus
Thanks for your suggestion. But I am not using any resource manager (i.e., I
am launching mpirun from the bash shell). In fact, both of the clusters I
talked about run CentOS 7 and I launch the job the same way on both of
them, yet one of them creates standard core files and the other
Hello Llolsten
Is there a specific reason you run as root? This practice is discouraged,
isn't it?
Also, isn't it true that OMPI uses ephemeral (i.e. 'user level, randomly
chosen') ports for TCP transport? In that case, how did this ever work
with a firewall enabled?
I have, in the past, have
Hello all,
We've been running Open MPI for a long time, up to version 1.8.2 on
CentOS 6.7, with commands such as the one below:
/usr/local/bin/mpirun --allow-run-as-root --mca btl openib,self,sm --mca pml
ob1 -np 2 -np 8 -hostfile /root/mpi-hosts /usr/local/bin/IMB-MPI1
To be able to run
Greetings,
I am trying to install OpenMPI 1.10.1/1.10.2 with gcc (GCC) 5.2.1 20150902 (Red
Hat 5.2.1-2) statically,
$ ./configure --prefix=/home/ilias/bin/openmpi-1.10.1_gnu_static CXX=g++ CC=gcc
F77=gfortran FC=gfortran LDFLAGS=--static -ldl -lrt --disable-shared
--enable-static --disable-v
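Gilles's earlier message says the same command line built once the LDFLAGS parameters were quoted. A sketch of that fix, using only the flags visible above (the options cut off after --disable-v in the original post are omitted rather than guessed):

```shell
./configure --prefix=/home/ilias/bin/openmpi-1.10.1_gnu_static \
    CXX=g++ CC=gcc F77=gfortran FC=gfortran \
    LDFLAGS="--static -ldl -lrt" \
    --disable-shared --enable-static
```

Without the quotes, the shell treats -ldl and -lrt as separate configure arguments instead of part of LDFLAGS.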
Hi Durga
Just in case ...
If you're using a resource manager to start the jobs (Torque, etc.),
you need to have it set the limits (for coredump size, stack size,
locked memory size, etc.).
This way the jobs will inherit the limits from the
resource manager daemon.
On Torque (which I use) I do th
Jeff,
Thanks for the explanation. It's very clear.
Best regards,
Zhen
On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote:
> On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
> >
> > I have another question. I thought MPI_Test is a local call, meaning it
> doesn't send/receive message.
On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
>
> I have another question. I thought MPI_Test is a local call, meaning it
> doesn't send/receive message. Am I misunderstanding something? Thanks again.
From the user's perspective, MPI_TEST is a local call, in that it checks to
see if an MPI_Re
Jeff,
I have another question. I thought MPI_Test is a local call, meaning it
doesn't send/receive messages. Am I misunderstanding something? Thanks again.
Best regards,
Zhen
On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres)
wrote:
> It's taking so long because you are sleeping for .1 sec
I've sent you all the outputs from the configure, make and make install
commands...
Today I compiled Open MPI with the latest gcc version (6.1.1) shipped
with my Arch Linux distro and everything seems OK, so I think that the
problem is with the Intel compiler.
Giacomo Rossi Ph.D., Space Engineer
Resear