I've sent you all the outputs from the configure, make and make install
commands...
Today I've compiled Open MPI with the latest gcc version (6.1.1) shipped
with my Arch Linux distro and everything seems OK, so I think that the
problem is with the Intel compiler.
Giacomo Rossi Ph.D., Space Engineer
Jeff,
I have another question. I thought MPI_Test is a local call, meaning it
doesn't send/receive message. Am I misunderstanding something? Thanks again.
Best regards,
Zhen
On Thu, May 5, 2016 at 9:45 PM, Jeff Squyres (jsquyres)
wrote:
> It's taking so long because you
On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
>
> I have another question. I thought MPI_Test is a local call, meaning it
> doesn't send/receive message. Am I misunderstanding something? Thanks again.
From the user's perspective, MPI_TEST is a local call, in that it checks to
Jeff,
Thanks for the explanation. It's very clear.
Best regards,
Zhen
On Mon, May 9, 2016 at 10:19 AM, Jeff Squyres (jsquyres) wrote:
> On May 9, 2016, at 8:23 AM, Zhen Wang wrote:
> >
> > I have another question. I thought MPI_Test is a local call,
Hi Durga
Just in case ...
If you're using a resource manager to start the jobs (Torque, etc),
you need to have them set the limits (for coredump size, stacksize,
locked memory size, etc).
This way the jobs will inherit the limits from the
resource manager daemon.
On Torque (which I use) I do
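Gus's message is truncated right at his Torque recipe, but the general shape of such a setup is to raise the limits in the daemon's startup environment so every spawned job inherits them. A hypothetical excerpt (not Gus's actual config) from a pbs_mom init script:

```shell
# Raise limits in the pbs_mom daemon's environment; child jobs inherit them.
ulimit -c unlimited   # core dump size
ulimit -s unlimited   # stack size
ulimit -l unlimited   # max locked memory (often needed for RDMA networks)
```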
Greetings,
I am trying to install OpenMPI 1.10.1/1.10.2 with gcc (GCC) 5.2.1 20150902 (Red
Hat 5.2.1-2) statically,
$ ./configure --prefix=/home/ilias/bin/openmpi-1.10.1_gnu_static CXX=g++ CC=gcc
F77=gfortran FC=gfortran LDFLAGS=--static -ldl -lrt --disable-shared
--enable-static
Hi Gus
Thanks for your suggestion. But I am not using any resource manager (i.e.,
I am launching mpirun from the bash shell). In fact, both of the clusters
I talked about run CentOS 7 and I launch the job the same way on both of
them, yet one of them creates standard core files and the other
Hello,
I am having trouble understanding why I am getting an error when running
the program produced by the attached C file. In this file, there are three
short functions: send(), bounce() and main(). send() and bounce() both use
MPI_Send() and MPI_Recv(), but critically, neither one is called
Devon,
send() is a libc function that is used internally by Open MPI, and here it
ends up using your user function instead of the libc one.
Simply rename your function mysend() or something else that is not used by
libc, and your issue will likely be fixed.
Cheers,
Gilles
On Tuesday, May 10, 2016, Devon
Hi,
I was able to build Open MPI 1.10.2 with the same configure command line
(after I quoted the LDFLAGS parameters).
Can you please run
grep SIZEOF_PTRDIFF_T config.status
It should be 4 or 8, but it seems different in your environment (!)
Are you running a 32- or 64-bit kernel? On which
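The quoting Gilles mentions presumably means passing the whole LDFLAGS value as a single shell word, so configure does not mistake `-ldl -lrt` for its own arguments. A sketch of what the quoted command line likely looks like, reusing the paths from the earlier message:

```shell
./configure --prefix=/home/ilias/bin/openmpi-1.10.1_gnu_static \
    CXX=g++ CC=gcc F77=gfortran FC=gfortran \
    LDFLAGS="--static -ldl -lrt" \
    --disable-shared --enable-static
```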
Hello,
When I try to use hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET,
I obtain NULL as a pointer and the error message is: Invalid Argument.
If I try without it, it works. It also works with HWLOC_MEMBIND_STRICT
and/or HWLOC_MEMBIND_THREAD.
My hwloc version is :
~/usr/bin/hwloc-bind
Hello Hugo,
Can you send your code and a description of the machine so that I can try
to reproduce it?
By the way, BYNODESET is also available in 1.11.3.
Brice
On 09/05/2016 16:18, Hugo Brunie wrote:
> Hello,
>
> When I try to use hwloc_alloc_membind with HWLOC_MEMBIND_BYNODESET
> I obtain NULL
So... I downloaded the latest library and it works fine.
I am sorry for the useless topic. (I had a link to an old version of the
lib in my LD_LIBRARY_PATH; that's why it didn't work well.)
Hugo BRUNIE
----- Original Message -----
> From: "Hugo Brunie"
> To: "Hardware locality
Greetings!
We've been receiving this error for a while on our 64-core Interlagos
AMD machines:
* hwloc has encountered what looks like an error from the operating system.
*
* Socket (P#2 cpuset 0x,0x0)
Sorry for the typo in the subject, I meant "Topology" ;)
On 5/9/16 5:58 PM, Mehmet Belgin wrote:
Greetings!
We've been receiving this error for a while on our 64-core Interlagos
AMD machines:
* hwloc has
On 09/05/2016 23:58, Mehmet Belgin wrote:
> Greetings!
>
> We've been receiving this error for a while on our 64-core Interlagos
> AMD machines:
>
>
>
> * hwloc has encountered what looks like an error from the
Thank you Brice for your quick reply! We will give BIOS upgrade a try
and share our findings with the list.
-Mehmet
On 5/9/16 6:10 PM, Brice Goglin wrote:
On 09/05/2016 23:58, Mehmet Belgin wrote:
Greetings!
We've been receiving this error for a while on our 64-core Interlagos
AMD
Hi folks
As we look at the Python client, there is an issue with the supported
Python version. There was a significant break in the user-level API between
Python 2.x and Python 3. Some of the issues are described here:
https://docs.python.org/2/glossary.html#term-2to3
Noah and I have chatted
Is it possible to give a friendly error message at run time if you accidentally
run with Python 3.x?
> On May 9, 2016, at 12:37 PM, Ralph Castain wrote:
>
> Hi folks
>
> As we look at the Python client, there is an issue with the supported Python
> version. There was a
Good question - I'm not sure we can. What happens right now is you get
syntax errors during the compile. I'll have to play and see if we can
generate an error message before we hit that point.
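One hedged way to fail before the SyntaxError Ralph mentions: keep the entry module's own syntax valid under both Python 2 and 3, check sys.version_info there, and only import the Python-2-only modules after the check passes. A sketch (function name is hypothetical, not part of the actual client):

```python
import sys

def check_python_version(version_info=None):
    """Return an error message if running under Python 3.x, else None.

    This must live in a module whose own syntax parses under both
    Python 2 and 3; otherwise the interpreter raises a SyntaxError at
    compile time, before this check can ever run.
    """
    vi = version_info if version_info is not None else sys.version_info
    if vi[0] >= 3:
        return ("This client requires Python 2.x; "
                "you are running Python %d.%d" % (vi[0], vi[1]))
    return None
```

The entry script would call this first, write the message to stderr, and sys.exit(1) before any Python-2-only import is reached.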
On Mon, May 9, 2016 at 9:38 AM, Jeff Squyres (jsquyres)
wrote:
> Is it possible to