Re: [OMPI devel] v1.10.3rc3 out for test

2016-05-25 Thread Burette, Yohann
Hi Ralph,

Just a quick note to let you know that I ran this rc3 with the OFI MTL on our 
PSM nodes. No issues to report.
I ran both shared and static builds under RHEL7.

Hope this helps,
Yohann

From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Ralph Castain
Sent: Tuesday, May 24, 2016 8:49 PM
To: Open MPI Developers 
Subject: [OMPI devel] v1.10.3rc3 out for test

Hi folks

I believe this is ready for final test now. Please give it a whirl and let us 
know!

https://www.open-mpi.org/software/ompi/v1.10/

Ralph



Re: [OMPI devel] 1.10.0rc2

2015-07-23 Thread Burette, Yohann
Paul,

While looking at the issue, we noticed that we were missing some code that 
deals with MTL priorities.

PR 409 (https://github.com/open-mpi/ompi-release/pull/409) is attempting to fix 
that.

Hopefully, this will also fix the error you encountered.

Thanks again,
Yohann

From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Paul Hargrove
Sent: Wednesday, July 22, 2015 12:07 PM
To: Open MPI Developers
Subject: Re: [OMPI devel] 1.10.0rc2

Yohann,

Things run fine with those additional flags.
In fact, adding just "--mca pml cm" is sufficient to eliminate the SEGV.

-Paul

On Wed, Jul 22, 2015 at 8:49 AM, Burette, Yohann 
<yohann.bure...@intel.com> wrote:
Hi Paul,

Thank you for doing all this testing!

About 1), it’s hard for me to see whether it’s a problem with mtl:ofi or with 
how OMPI selects the components to use.
Could you please run your test again with “--mca mtl ofi --mca mtl_ofi_provider 
sockets --mca pml cm”?
The idea is that if it still fails, then we have a problem with either mtl:ofi 
or the OFI/sockets provider. If it works, then there is an issue with how OMPI 
selects what component to use.

I just tried 1.10.0rc2 with the latest libfabric (master) and it seems to work 
fine.

Yohann

From: devel 
[mailto:devel-boun...@open-mpi.org] On 
Behalf Of Paul Hargrove
Sent: Wednesday, July 22, 2015 1:05 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] 1.10.0rc2

1.10.0rc2 looks mostly good to me, but I still found some issues.


1) New to this round of testing, I have built mtl:ofi with gcc, pgi, icc, 
clang, open64 and studio compilers.
I have only the sockets provider in libfabric (v1.0.0 and 1.1.0rc2).
However, unless I pass "-mca mtl ^ofi" to mpirun I get a SEGV from a callback 
invoked in opal_progress().
GDB did not give a function name for the callback, but the PC looks valid.


2) Of the several compilers I tried, only pgi-13.10 failed to compile mtl:ofi:

/bin/sh ../../../../libtool  --tag=CC   --mode=compile pgcc 
-DHAVE_CONFIG_H -I. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/mca/mtl/ofi
 -I../../../../opal/include -I../../../../orte/include 
-I../../../../ompi/include -I../../../../oshmem/include 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/private/autogen 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/hwloc/autogen  
-I/usr/common/ftg/libfabric/1.1.0rc2p1/include 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2
 -I../../../.. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/orte/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/oshmem/include
   
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/hwloc/hwloc191/hwloc/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/BLD/opal/mca/hwloc/hwloc191/hwloc/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/event/libevent2021/libevent
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/event/libevent2021/libevent/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/BLD/opal/mca/event/libevent2021/libevent/include
  -g  -c -o mtl_ofi_component.lo 
/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/mca/mtl/ofi/mtl_ofi_component.c
libtool: compile:  pgcc -DHAVE_CONFIG_H -I. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/mca/mtl/ofi
 -I../../../../opal/include -I../../../../orte/include 
-I../../../../ompi/include -I../../../../oshmem/include 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/private/autogen 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/hwloc/autogen 
-I/usr/common/ftg/libfabric/1.1.0rc2p1/include 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2
 -I../../../.. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/orte/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/oshmem/include

Re: [OMPI devel] 1.10.0rc2

2015-07-22 Thread Burette, Yohann
Hi Paul,

Thank you for doing all this testing!

About 1), it’s hard for me to see whether it’s a problem with mtl:ofi or with 
how OMPI selects the components to use.
Could you please run your test again with “--mca mtl ofi --mca mtl_ofi_provider 
sockets --mca pml cm”?
The idea is that if it still fails, then we have a problem with either mtl:ofi 
or the OFI/sockets provider. If it works, then there is an issue with how OMPI 
selects what component to use.
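For concreteness, the full command line could look something like this (the
rank count and binary name below are only placeholders for whatever test you
were running):

  mpirun -np 2 --mca mtl ofi --mca mtl_ofi_provider sockets --mca pml cm ./your_test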

I just tried 1.10.0rc2 with the latest libfabric (master) and it seems to work 
fine.

Yohann

From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Paul Hargrove
Sent: Wednesday, July 22, 2015 1:05 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] 1.10.0rc2

1.10.0rc2 looks mostly good to me, but I still found some issues.


1) New to this round of testing, I have built mtl:ofi with gcc, pgi, icc, 
clang, open64 and studio compilers.
I have only the sockets provider in libfabric (v1.0.0 and 1.1.0rc2).
However, unless I pass "-mca mtl ^ofi" to mpirun I get a SEGV from a callback 
invoked in opal_progress().
GDB did not give a function name for the callback, but the PC looks valid.


2) Of the several compilers I tried, only pgi-13.10 failed to compile mtl:ofi:

/bin/sh ../../../../libtool  --tag=CC   --mode=compile pgcc 
-DHAVE_CONFIG_H -I. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/mca/mtl/ofi
 -I../../../../opal/include -I../../../../orte/include 
-I../../../../ompi/include -I../../../../oshmem/include 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/private/autogen 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/hwloc/autogen  
-I/usr/common/ftg/libfabric/1.1.0rc2p1/include 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2
 -I../../../.. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/orte/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/oshmem/include
   
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/hwloc/hwloc191/hwloc/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/BLD/opal/mca/hwloc/hwloc191/hwloc/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/event/libevent2021/libevent
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/event/libevent2021/libevent/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/BLD/opal/mca/event/libevent2021/libevent/include
  -g  -c -o mtl_ofi_component.lo 
/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/mca/mtl/ofi/mtl_ofi_component.c
libtool: compile:  pgcc -DHAVE_CONFIG_H -I. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/mca/mtl/ofi
 -I../../../../opal/include -I../../../../orte/include 
-I../../../../ompi/include -I../../../../oshmem/include 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/private/autogen 
-I../../../../opal/mca/hwloc/hwloc191/hwloc/include/hwloc/autogen 
-I/usr/common/ftg/libfabric/1.1.0rc2p1/include 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2
 -I../../../.. 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/orte/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/ompi/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/oshmem/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/hwloc/hwloc191/hwloc/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/BLD/opal/mca/hwloc/hwloc191/hwloc/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/event/libevent2021/libevent
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/openmpi-1.10.0rc2/opal/mca/event/libevent2021/libevent/include
 
-I/global/homes/h/hargrove/GSCRATCH/OMPI/openmpi-1.10.0rc2-linux-x86_64-pgi-13.10/BLD/opal/mca/event/libevent2021/libevent/include
 -g -c 

Re: [OMPI devel] Hang in IMB-RMA?

2015-05-20 Thread Burette, Yohann
Hi Nathan,

Not entirely sure it is related, but I'm also seeing a hang at the end of 
Put_all_local with -mca pml ob1 -mca btl tcp,sm,self.
It seems to have finished the test but doesn't proceed to the next one. When 
run alone, Put_all_local finishes fine.
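(For reference, "run alone" here means invoking just that one benchmark, along
the lines of the reproducer quoted below, e.g. something like:

  mpirun -np 4 -mca pml ob1 -mca btl tcp,sm,self ./IMB-RMA Put_all_local

with the same np as Andrew's original command.)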

Also, I verified with master and I see no hang.

Yohann

-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Nathan Hjelm
Sent: Tuesday, May 12, 2015 11:59 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] Hang in IMB-RMA?


Thanks! I will look at osc/rdma in 1.8 and see about patching the bug. The RMA 
code in master and 1.8 has diverged significantly, but it shouldn't be too 
difficult to fix.

-Nathan

On Tue, May 12, 2015 at 06:50:41PM +, Friedley, Andrew wrote:
> Hi Nathan,
> 
> I should have thought to do that.  Yes, the issue seems to be fixed on master 
> -- no hangs on PSM, openib, or tcp.
> 
> Andrew
> 
> > -Original Message-
> > From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Nathan 
> > Hjelm
> > Sent: Tuesday, May 12, 2015 9:44 AM
> > To: Open MPI Developers
> > Subject: Re: [OMPI devel] Hang in IMB-RMA?
> > 
> > 
> > Thanks for the report. Can you try with master and see if the issue 
> > is fixed there?
> > 
> > -Nathan
> > 
> > On Tue, May 12, 2015 at 04:38:01PM +, Friedley, Andrew wrote:
> > > Hi,
> > >
> > > I've run into a problem with the IMB-RMA exchange_get test.  At this
> > > point I suspect it's an issue in Open MPI or the test itself.  Could
> > > someone take a look?
> > >
> > > I'm running Open MPI 1.8.5 and IMB 4.0.2.  MVAPICH2 is able to run
> > > all of IMB-RMA successfully.
> > >
> > >  mpirun -np 4 -mca pml ob1 -mca btl tcp,sm,self ./IMB-RMA
> > >
> > > Eventually hangs at the end of exchange_get (after 4mb is reported)
> > > running the np=2 pass.  IMB runs every np power of 2 up to and
> > > including the np given on the command line.  So, with mpirun -np 4
> > > above, IMB runs each of its tests with np=2 and then with np=4.
> > >
> > > If I run just the exchange_get test, the same thing happens:
> > >
> > >  mpirun -np 4 -mca pml ob1 -mca btl tcp,sm,self ./IMB-RMA 
> > > exchange_get
> > >
> > > If I run either of the above commands with -np 2, IMB-RMA
> > > successfully runs to completion.
> > >
> > > I have reproduced with tcp, verbs, and PSM -- does not appear to be
> > > transport specific.  MVAPICH2 2.0 works.
> > >
> > > Below are backtraces from two of the four ranks.  The other two
> > > ranks each have a backtrace similar to these two.
> > >
> > > Thanks!
> > >
> > > Andrew
> > >
> > > #0  0x7fca39a4c0c7 in sched_yield () from /lib64/libc.so.6
> > > #1  0x7fca393ef2fb in opal_progress () at runtime/opal_progress.c:197
> > > #2  0x7fca33cd21f5 in opal_condition_wait (m=0x247fc70, c=0x247fcd8) at ../../../../opal/threads/condition.h:78
> > > #3  ompi_osc_rdma_flush_lock (module=module@entry=0x247fb50, lock=0x2481a20, target=target@entry=3) at osc_rdma_passive_target.c:530
> > > #4  0x7fca33cd43bd in ompi_osc_rdma_flush (target=3, win=0x2482150) at osc_rdma_passive_target.c:578
> > > #5  0x7fca39fe5654 in PMPI_Win_flush (rank=3, win=0x2482150) at pwin_flush.c:58
> > > #6  0x0040aec5 in IMB_rma_exchange_get ()
> > > #7  0x00406a35 in IMB_warm_up ()
> > > #8  0x004023bd in main ()
> > >
> > > #0  0x7f1c81890bdd in poll () from /lib64/libc.so.6
> > > #1  0x7f1c81271c86 in poll_dispatch (base=0x1be8350, tv=0x7fff4c323480) at poll.c:165
> > > #2  0x7f1c81269aa4 in opal_libevent2021_event_base_loop (base=0x1be8350, flags=2) at event.c:1633
> > > #3  0x7f1c812232e8 in opal_progress () at runtime/opal_progress.c:169
> > > #4  0x7f1c7b9641f5 in opal_condition_wait (m=0x1ccf4a0, c=0x1ccf508) at ../../../../opal/threads/condition.h:78
> > > #5  ompi_osc_rdma_flush_lock (module=module@entry=0x1ccf380, lock=0x23287f0, target=target@entry=0) at osc_rdma_passive_target.c:530
> > > #6  0x7f1c7b9663bd in ompi_osc_rdma_flush (target=0, win=0x2317d00) at osc_rdma_passive_target.c:578
> > > #7  0x7f1c81e19654 in PMPI_Win_flush (rank=0, win=0x2317d00) at pwin_flush.c:58
> > > #8  0x0040aec5 in IMB_rma_exchange_get ()
> > > #9  0x00406a35 in IMB_warm_up ()
> > > #10 0x004023bd in main ()

Re: [OMPI devel] Master warnings

2015-01-27 Thread Burette, Yohann
I fixed the compiler warnings for the OFI MTL.
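In case the details are useful: both OFI MTL warnings quoted below come down
to mismatched types. A generic illustration of the two kinds of fix
(hypothetical variable names, not the actual commit):

#include <stdio.h>
#include <stddef.h>

/* Illustration only -- not the real OMPI code. */
static void example_fixes(size_t nprocs, int error_code)
{
    /* "comparison between signed and unsigned": use a size_t counter so the
     * loop index matches the unsigned bound instead of comparing int vs size_t. */
    for (size_t i = 0; i < nprocs; ++i) {
        /* per-proc work */
    }

    /* "format '%s' expects type 'char *', but argument has type 'int'":
     * print the int with %d (or convert it to a string first). */
    printf("error code: %d\n", error_code);
}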

Yohann

From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of George Bosilca
Sent: Tuesday, January 27, 2015 10:26 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] Master warnings

I took care of the TCP warnings.

  George.



On Tue, Jan 27, 2015 at 7:20 AM, Ralph Castain wrote:

btl_tcp_frag.c: In function 'mca_btl_tcp_frag_dump':

btl_tcp_frag.c:99: warning: comparison between signed and unsigned

btl_tcp_frag.c:104: warning: comparison between signed and unsigned



mtl_ofi.c: In function 'ompi_mtl_ofi_add_procs':

mtl_ofi.c:108: warning: comparison between signed and unsigned

mtl_ofi.c:111: warning: format '%s' expects type 'char *', but argument 6 has 
type 'int'



base/memheap_base_mkey.c: In function 'oshmem_mkey_recv_cb':

base/memheap_base_mkey.c:375: warning: comparison of distinct pointer types 
lacks a cast





Re: [OMPI devel] Changed behaviour with PSM on master

2015-01-09 Thread Burette, Yohann
Hi,

For those of you who don't know me, my name is Yohann Burette; I work for 
Intel and I contributed the OFI MTL.

AFAIK, the PSM MTL should have priority over the OFI MTL.

Please excuse my ignorance, but is there a way to express this priority in the 
MTLs? Here is what is in ompi/mca/mtl/base/mtl_base_frame.c:

/*
 * Function for selecting one component from all those that are
 * available.
 *
 * For now, we take the first component that says it can run.  Might
 * need to reexamine this at a later time.
 */
int
ompi_mtl_base_select(bool enable_progress_threads,
 bool enable_mpi_threads)

Am I missing anything?
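For illustration only, a priority-aware selection could look roughly like the
sketch below (made-up types and names, not the real MCA interfaces), where
every component reports a priority and the highest one wins:

#include <stddef.h>

/* Hypothetical component type, for illustration only. */
typedef struct {
    const char *name;
    /* Returns a module on success and fills in *priority, NULL otherwise. */
    void *(*component_init)(int *priority);
} example_mtl_component_t;

static void *select_by_priority(example_mtl_component_t *comps, size_t n)
{
    void *best = NULL;
    int best_priority = -1;

    for (size_t i = 0; i < n; ++i) {
        int priority = 0;
        void *module = comps[i].component_init(&priority);
        if (NULL != module && priority > best_priority) {
            /* A real implementation would also finalize the previously
             * selected module before replacing it. */
            best = module;
            best_priority = priority;
        }
    }
    return best;
}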

Thanks in advance,
Yohann

-Original Message-
From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of Jeff Squyres 
(jsquyres)
Sent: Friday, January 09, 2015 1:27 PM
To: Open MPI Developers List
Subject: Re: [OMPI devel] Changed behaviour with PSM on master

+1 -- someone should file a bug.

I think Intel needs to decide how they want to handle this (e.g., whether the 
PSM MTL or OFI MTL should be the default, and how the other can detect if it's 
not the default and therefore it's safe to call psm_init... or something like 
that).


On Jan 9, 2015, at 4:10 PM, Howard Pritchard wrote:

> HI Adrian,
> 
> Please open an issue.  We don't want users having to explicitly 
> specify the MTL to use just to get a job to run on an Intel/InfiniPath system.
> 
> Howard
> 
> 2015-01-09 13:04 GMT-07:00 Adrian Reber :
> Should I still open a ticket? Will these be changed or do I always 
> have to provide '--mca mtl psm' in the future?
> 
> On Fri, Jan 09, 2015 at 12:27:59PM -0700, Howard Pritchard wrote:
> > HI Adrian, Andrew,
> >
> > Sorry, try again: both the libfabric PSM provider and the Open MPI 
> > PSM MTL are trying to use psm_init.
> >
> > So, to avoid this problem, add
> >
> > --mca mtl psm
> >
> > to your mpirun command line.
> >
> > Sorry for the confusion.
> >
> > Howard
> >
> >
> > 2015-01-09 7:52 GMT-07:00 Friedley, Andrew :
> >
> > > No this is not expected behavior.
> > >
> > > The PSM MTL code has not changed in 2 months, when I fixed that 
> > > unused variable warning for you.  That suggests something above 
> > > the PSM MTL broke things.  I see no reason your older software 
> > > install should suddenly stop working if all you are updating 
> > > is OMPI master -- at least with respect to PSM anyway.
> > >
> > > The error message is right, it's not possible to open more than 
> > > one context per process.  This hasn't changed.  It does indicate 
> > > that maybe something is causing the MTL to be opened twice in each 
> > > process?
> > >
> > > Andrew
> > >
> > > > -Original Message-
> > > > From: devel [mailto:devel-boun...@open-mpi.org] On Behalf Of 
> > > > Adrian Reber
> > > > Sent: Friday, January 9, 2015 4:13 AM
> > > > To: de...@open-mpi.org
> > > > Subject: [OMPI devel] Changed behaviour with PSM on master
> > > >
> > > > Running the mpi_test_suite on master used to work with no 
> > > > problems. At some point it stopped working, however, and now I get 
> > > > only error messages from PSM:
> > > >
> > > > """
> > > > n050301:3.0.In PSM version 1.14, it is not possible to open more
> > > > than one context per process
> > > >
> > > > [n050301:26526] Open MPI detected an unexpected PSM error in 
> > > > opening an
> > > > endpoint: In PSM version 1.14, it is not possible to open more 
> > > > than one context per process """
> > > >
> > > > I know that I do not have the newest version of the PSM library and
> > > > that I need to update the library, but as this requires many software
> > > > packages to be re-compiled, we are trying to avoid it on our
> > > > CentOS6-based system.
> > > >
> > > > My main question (probably for Andrew) is whether this is expected
> > > > behaviour on master. It works on 1.8.x and it used to work on master
> > > > at least until 2014-12-08.
> > > >
> > > > This is the last MTT entry for working PSM (with my older 
> > > > version)
> > > > http://mtt.open-mpi.org/index.php?do_redir=2226
> > > >
> > > > and for the past few days it has been failing on master
> > > > http://mtt.open-mpi.org/index.php?do_redir=2225
> > > >
> > > > On another system (RHEL7) with newer PSM libraries there is no such
> > > > error.
> > > >
> > > >   Adrian