Re: [OMPI users] mpi program gets stuck

2022-11-29 Thread Jeff Squyres (jsquyres) via users
(we've conversed a bit off-list; bringing this back to the list with a good 
subject to differentiate it from other digest threads)

I'm glad the tarball I provided (that included the PMIx fix) resolved running 
"uptime" for you.

Can you try running a plain C MPI program instead of a Python MPI program?  
That would just eliminate a few more variables from the troubleshooting process.

In the "examples" directory in the tarball I provided are trivial "hello world" 
and "ring" MPI programs.  A "make" should build them all.  Try running hello_c 
and ring_c.
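
For reference, hello_c amounts to roughly the following (a minimal sketch; the actual examples/hello_c.c in the tarball may differ in its details):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(name, &len);     /* host this rank runs on */
        printf("Hello, world: I am rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

If this runs to completion across both of your hosts but the Python program still hangs, that points at the mpi4py layer rather than the launcher.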

--
Jeff Squyres
jsquy...@cisco.com

From: timesir 
Sent: Tuesday, November 29, 2022 10:42 AM
To: Jeff Squyres (jsquyres) ; Open MPI Users
Subject: mpi program gets stuck


see also: https://pastebin.com/s5tjaUkF

(py3.9) ➜  /share  cat hosts
192.168.180.48 slots=1
192.168.60.203 slots=1

1.  This command now runs correctly using your openmpi-gitclone-pr11096.tar.bz2
(py3.9) ➜  /share mpirun -n 2 --machinefile hosts --mca plm_base_verbose 100 --mca rmaps_base_verbose 100 --mca ras_base_verbose 100 uptime


2. But this command gets stuck. It seems to be the MPI program itself that hangs.
test.py:
import mpi4py
from mpi4py import MPI

(py3.9) ➜  /share mpirun -n 2 --machinefile hosts --mca plm_base_verbose 100 --mca rmaps_base_verbose 100 --mca ras_base_verbose 100 python test.py
[computer01:47982] mca: base: component_find: searching NULL for plm components
[computer01:47982] mca: base: find_dyn_components: checking NULL for plm components
[computer01:47982] pmix:mca: base: components_register: registering framework plm components
[computer01:47982] pmix:mca: base: components_register: found loaded component slurm
[computer01:47982] pmix:mca: base: components_register: component slurm register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component ssh
[computer01:47982] pmix:mca: base: components_register: component ssh register function successful
[computer01:47982] mca: base: components_open: opening plm components
[computer01:47982] mca: base: components_open: found loaded component slurm
[computer01:47982] mca: base: components_open: component slurm open function successful
[computer01:47982] mca: base: components_open: found loaded component ssh
[computer01:47982] mca: base: components_open: component ssh open function successful
[computer01:47982] mca:base:select: Auto-selecting plm components
[computer01:47982] mca:base:select:(  plm) Querying component [slurm]
[computer01:47982] mca:base:select:(  plm) Querying component [ssh]
[computer01:47982] [[INVALID],0] plm:ssh_lookup on agent ssh : rsh path NULL
[computer01:47982] mca:base:select:(  plm) Query of component [ssh] set priority to 10
[computer01:47982] mca:base:select:(  plm) Selected component [ssh]
[computer01:47982] mca: base: close: component slurm closed
[computer01:47982] mca: base: close: unloading component slurm
[computer01:47982] [prterun-computer01-47982@0,0] plm:ssh_setup on agent ssh : rsh path NULL
[computer01:47982] [prterun-computer01-47982@0,0] plm:base:receive start comm
[computer01:47982] mca: base: component_find: searching NULL for ras components
[computer01:47982] mca: base: find_dyn_components: checking NULL for ras components
[computer01:47982] pmix:mca: base: components_register: registering framework ras components
[computer01:47982] pmix:mca: base: components_register: found loaded component simulator
[computer01:47982] pmix:mca: base: components_register: component simulator register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component pbs
[computer01:47982] pmix:mca: base: components_register: component pbs register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component slurm
[computer01:47982] pmix:mca: base: components_register: component slurm register function successful
[computer01:47982] mca: base: components_open: opening ras components
[computer01:47982] mca: base: components_open: found loaded component simulator
[computer01:47982] mca: base: components_open: found loaded component pbs
[computer01:47982] mca: base: components_open: component pbs open function successful
[computer01:47982] mca: base: components_open: found loaded component slurm
[computer01:47982] mca: base: components_open: component slurm open function successful
[computer01:47982] mca:base:select: Auto-selecting ras components
[computer01:47982] mca:base:select:(  ras) Querying component [simulator]
[computer01:47982] mca:base:select:(  ras) Querying component [pbs]
[computer01:47982] mca:base:select:(  ras) Querying component [slurm]
[computer01:47982] mca:base:select:(  ras) No component selected!
[computer01:47982] mca: base: component_find: searching NULL for rmaps components
[computer01:47982] mca: base: find_dyn_components: checking NULL for rmaps components
[computer01:47982] pmix:mca: base: 

Re: [OMPI users] CephFS and striping_factor

2022-11-29 Thread Edgar Gabriel via users

I can also offer to help if there are any questions regarding the ompio code, but I do not 
have the bandwidth/resources to do the work myself, and more importantly, I do not have a 
platform to test the new component.
Edgar

From: users  On Behalf Of Jeff Squyres 
(jsquyres) via users
Sent: Tuesday, November 29, 2022 9:16 AM
To: users@lists.open-mpi.org
Cc: Jeff Squyres (jsquyres) 
Subject: Re: [OMPI users] CephFS and striping_factor

More specifically, Gilles created a skeleton "ceph" component in this draft 
pull request: https://github.com/open-mpi/ompi/pull/11122

If anyone has any cycles to work on it and develop it beyond the skeleton that 
is currently there, that would be great!

--
Jeff Squyres
jsquy...@cisco.com

From: users <users-boun...@lists.open-mpi.org> on behalf of Gilles Gouaillardet via users <users@lists.open-mpi.org>
Sent: Monday, November 28, 2022 9:48 PM
To: users@lists.open-mpi.org <users@lists.open-mpi.org>
Cc: Gilles Gouaillardet <gil...@rist.or.jp>
Subject: Re: [OMPI users] CephFS and striping_factor
Subject: Re: [OMPI users] CephFS and striping_factor

Hi Eric,


Currently, Open MPI does not provide specific support for CephFS. MPI-IO is implemented
either by ROMIO (imported from MPICH; it does not support CephFS today) or by the
"native" ompio component (which also does not support CephFS today).

A proof of concept for CephFS in ompio might not be a huge amount of work for someone
motivated: it could be as simple as (so to speak, since these things are generally not
easy) creating a new fs/ceph component (e.g., in ompi/mca/fs/ceph) and implementing the
"file_open" callback using the Ceph API. I think the fs/lustre component can be used as
inspiration.

I cannot commit to doing this, but if you are willing to take a crack at it, I can create
such a component so you can go directly to implementing the callback without spending too
much time on Open MPI internals (e.g., component creation).


Cheers,


Gilles


On 11/29/2022 6:55 AM, Eric Chamberland via users wrote:
> Hi,
>
> I would like to know if Open MPI supports file creation with a
> "striping_factor" for CephFS?
>
> According to the CephFS library, I *think* it would be possible to do it
> at file creation time with "ceph_open_layout".
>
> https://github.com/ceph/ceph/blob/main/src/include/cephfs/libcephfs.h
>
> Is this a possible future enhancement?
>
> Thanks,
>
> Eric
>
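
To make the suggestion concrete, the libcephfs call Eric mentions would be exercised roughly as follows (an illustrative C sketch only, not an ompio component; it assumes the libcephfs headers are installed, and that passing 0 for stripe_unit/object_size and NULL for the data pool falls back to filesystem defaults):

    #include <fcntl.h>
    #include <cephfs/libcephfs.h>

    /* Create a file with an explicit striping factor via libcephfs. */
    static int create_striped(const char *path, int stripe_count)
    {
        struct ceph_mount_info *cmount;
        int fd, rc;

        rc = ceph_create(&cmount, NULL);     /* NULL: default client id */
        if (rc < 0)
            return rc;
        ceph_conf_read_file(cmount, NULL);   /* NULL: search default ceph.conf */
        rc = ceph_mount(cmount, "/");
        if (rc < 0) {
            ceph_release(cmount);
            return rc;
        }

        /* The call Eric pointed at: like ceph_open(), plus layout parameters. */
        fd = ceph_open_layout(cmount, path, O_CREAT | O_WRONLY, 0644,
                              0 /* stripe_unit: default */, stripe_count,
                              0 /* object_size: default */, NULL /* data pool */);
        if (fd >= 0)
            ceph_close(cmount, fd);

        ceph_unmount(cmount);
        ceph_release(cmount);
        return fd < 0 ? fd : 0;
    }

An fs/ceph component's "file_open" callback would presumably do something similar, mapping the MPI_Info "striping_factor" hint to the stripe_count argument.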


[OMPI users] mpi program gets stuck

2022-11-29 Thread timesir via users
see also: https://pastebin.com/s5tjaUkF

(py3.9) ➜  /share  cat hosts
192.168.180.48 slots=1
192.168.60.203 slots=1

1.  This command now runs correctly using your openmpi-gitclone-pr11096.tar.bz2
(py3.9) ➜  /share mpirun -n 2 --machinefile hosts --mca plm_base_verbose 100 --mca rmaps_base_verbose 100 --mca ras_base_verbose 100 uptime


2. But this command gets stuck. It seems to be the MPI program itself that hangs.
test.py:
import mpi4py
from mpi4py import MPI

(py3.9) ➜  /share mpirun -n 2 --machinefile hosts --mca plm_base_verbose 100 --mca rmaps_base_verbose 100 --mca ras_base_verbose 100 python test.py
[computer01:47982] mca: base: component_find: searching NULL for plm components
[computer01:47982] mca: base: find_dyn_components: checking NULL for plm components
[computer01:47982] pmix:mca: base: components_register: registering framework plm components
[computer01:47982] pmix:mca: base: components_register: found loaded component slurm
[computer01:47982] pmix:mca: base: components_register: component slurm register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component ssh
[computer01:47982] pmix:mca: base: components_register: component ssh register function successful
[computer01:47982] mca: base: components_open: opening plm components
[computer01:47982] mca: base: components_open: found loaded component slurm
[computer01:47982] mca: base: components_open: component slurm open function successful
[computer01:47982] mca: base: components_open: found loaded component ssh
[computer01:47982] mca: base: components_open: component ssh open function successful
[computer01:47982] mca:base:select: Auto-selecting plm components
[computer01:47982] mca:base:select:(  plm) Querying component [slurm]
[computer01:47982] mca:base:select:(  plm) Querying component [ssh]
[computer01:47982] [[INVALID],0] plm:ssh_lookup on agent ssh : rsh path NULL
[computer01:47982] mca:base:select:(  plm) Query of component [ssh] set priority to 10
[computer01:47982] mca:base:select:(  plm) Selected component [ssh]
[computer01:47982] mca: base: close: component slurm closed
[computer01:47982] mca: base: close: unloading component slurm
[computer01:47982] [prterun-computer01-47982@0,0] plm:ssh_setup on agent ssh : rsh path NULL
[computer01:47982] [prterun-computer01-47982@0,0] plm:base:receive start comm
[computer01:47982] mca: base: component_find: searching NULL for ras components
[computer01:47982] mca: base: find_dyn_components: checking NULL for ras components
[computer01:47982] pmix:mca: base: components_register: registering framework ras components
[computer01:47982] pmix:mca: base: components_register: found loaded component simulator
[computer01:47982] pmix:mca: base: components_register: component simulator register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component pbs
[computer01:47982] pmix:mca: base: components_register: component pbs register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component slurm
[computer01:47982] pmix:mca: base: components_register: component slurm register function successful
[computer01:47982] mca: base: components_open: opening ras components
[computer01:47982] mca: base: components_open: found loaded component simulator
[computer01:47982] mca: base: components_open: found loaded component pbs
[computer01:47982] mca: base: components_open: component pbs open function successful
[computer01:47982] mca: base: components_open: found loaded component slurm
[computer01:47982] mca: base: components_open: component slurm open function successful
[computer01:47982] mca:base:select: Auto-selecting ras components
[computer01:47982] mca:base:select:(  ras) Querying component [simulator]
[computer01:47982] mca:base:select:(  ras) Querying component [pbs]
[computer01:47982] mca:base:select:(  ras) Querying component [slurm]
[computer01:47982] mca:base:select:(  ras) No component selected!
[computer01:47982] mca: base: component_find: searching NULL for rmaps components
[computer01:47982] mca: base: find_dyn_components: checking NULL for rmaps components
[computer01:47982] pmix:mca: base: components_register: registering framework rmaps components
[computer01:47982] pmix:mca: base: components_register: found loaded component ppr
[computer01:47982] pmix:mca: base: components_register: component ppr register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component rank_file
[computer01:47982] pmix:mca: base: components_register: component rank_file has no register or open function
[computer01:47982] pmix:mca: base: components_register: found loaded component round_robin
[computer01:47982] pmix:mca: base: components_register: component round_robin register function successful
[computer01:47982] pmix:mca: base: components_register: found loaded component seq
[computer01:47982] pmix:mca: base: components_register: component seq


Re: [OMPI users] Question about "mca" parameters

2022-11-29 Thread Jeff Squyres (jsquyres) via users
Also, you probably want to add "vader" into your BTL specification.  Although 
the name is counter-intuitive, "vader" in Open MPI v3.x and v4.x is the shared 
memory transport.  Hence, if you run with "btl=tcp,self", you are only allowing 
MPI processes to talk via the TCP stack or process loopback (which, by 
definition, is only for a process to talk to itself) -- even if they are on the 
same node.

Instead, if you run with "btl=tcp,vader,self", then MPI processes can talk via 
TCP, process loopback, or shared memory.  Hence, if two MPI processes are on 
the same node, they can use shared memory to communicate, which is 
significantly faster than TCP.

NOTE: In the upcoming Open MPI v5.0.x, the name "vader" has (finally) been 
deprecated and replaced with the more intuitive name "sm".  While 
"btl=tcp,vader,self" will still work in v5.0.x for backwards compatibility with 
v3.x and v4.x, "btl=tcp,sm,self" is preferred from v5.0.x onward.
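
Concretely, the same selection can be given on the mpirun command line or pinned in the MCA params file (a sketch using the v4.x component names):

    # on the command line:
    mpirun --mca btl tcp,vader,self -n 12 ./my_program

    # or, equivalently, as a line in $OMPI/etc/openmpi-mca-params.conf:
    btl = tcp,vader,self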

The problem you were seeing was the openib BTL component complaining that, as 
the help message described, the environment was not set up correctly to use the 
qib0 device.  It seems like you have a secondary / HPC-class network available 
(which could be faster / more efficient than TCP), but it isn't configured 
properly in your environment.  You might want to investigate the suggestion 
from the help message to set the memlock limits correctly, and see whether 
using the qib0 interfaces yields better performance.
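
If you do investigate, the usual fix is to raise the locked-memory limit for the users running MPI jobs; a sketch for a typical Linux setup (the exact file and scope vary by distro, and by whether jobs are launched through a daemon):

    # /etc/security/limits.conf
    *  soft  memlock  unlimited
    *  hard  memlock  unlimited

After changing it, log in again (or restart the relevant daemon) so the new limit takes effect, and verify with "ulimit -l".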

--
Jeff Squyres
jsquy...@cisco.com

From: users  on behalf of Gilles Gouaillardet 
via users 
Sent: Tuesday, November 29, 2022 3:36 AM
To: Gestió Servidors via users 
Cc: Gilles Gouaillardet 
Subject: Re: [OMPI users] Question about "mca" parameters

Hi,


Simply add


btl = tcp,self


If the openib error message persists, try also adding

osc_rdma_btls = ugni,uct,ucp

or simply

osc = ^rdma



Cheers,


Gilles

On 11/29/2022 5:16 PM, Gestió Servidors via users wrote:
>
> Hi,
>
> If I run “mpirun --mca btl tcp,self --mca allow_ib 0 -n 12 ./my_program”,
> I manage to suppress some “extra” info in the output, such as:
>
> The OpenFabrics (openib) BTL failed to initialize while trying to
>
> allocate some locked memory.  This typically can indicate that the
>
> memlock limits are set too low.  For most HPC installations, the
>
> memlock limits should be set to "unlimited".  The failure occured
>
> here:
>
> Local host:clus11
>
> OMPI source:   btl_openib.c:757
>
> Function:  opal_free_list_init()
>
> Device:qib0
>
> Memlock limit: 65536
>
> You may need to consult with your system administrator to get this
>
> problem fixed.  This FAQ entry on the Open MPI web site may also be
>
> helpful:
>
> http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
>
> --
>
> [clus11][[33029,1],0][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],1][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],9][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],8][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],2][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],6][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],10][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],11][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],5][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],3][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],4][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> [clus11][[33029,1],7][btl_openib.c:1062:mca_btl_openib_add_procs]
> could not prepare openib device for use
>
> or like
>
> By default, for Open MPI 4.0 and later, infiniband ports on a device
>
> are not used by default.  The intent is to use UCX for these devices.
>
> You can override this policy by setting the btl_openib_allow_ib MCA
> parameter
>
> to true.
>
> Local host:  clus11
>
> Local adapter:   qib0
>
> Local port:  1
>
> --
>
> --
>
> WARNING: There was an error initializing an OpenFabrics device.
>
> Local host:   clus11
>
> Local device: qib0
>
> 

[OMPI users] Question about "mca" parameters

2022-11-29 Thread Gestió Servidors via users
Hi,

If I run "mpirun --mca btl tcp,self --mca allow_ib 0 -n 12 ./my_program", I get 
to disable some "extra" info in the output file like:

The OpenFabrics (openib) BTL failed to initialize while trying to
allocate some locked memory.  This typically can indicate that the
memlock limits are set too low.  For most HPC installations, the
memlock limits should be set to "unlimited".  The failure occured
here:

  Local host:clus11
  OMPI source:   btl_openib.c:757
  Function:  opal_free_list_init()
  Device:qib0
  Memlock limit: 65536

You may need to consult with your system administrator to get this
problem fixed.  This FAQ entry on the Open MPI web site may also be
helpful:

http://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages
--
[clus11][[33029,1],0][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],1][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],9][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],8][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],2][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],6][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],10][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],11][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],5][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],3][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],4][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use
[clus11][[33029,1],7][btl_openib.c:1062:mca_btl_openib_add_procs] could not 
prepare openib device for use

or like
By default, for Open MPI 4.0 and later, infiniband ports on a device
are not used by default.  The intent is to use UCX for these devices.
You can override this policy by setting the btl_openib_allow_ib MCA parameter
to true.

  Local host:  clus11
  Local adapter:   qib0
  Local port:  1

--
--
WARNING: There was an error initializing an OpenFabrics device.

  Local host:   clus11
  Local device: qib0
--

So now I would like to force those parameters in the file 
$OMPI/etc/openmpi-mca-params.conf.  I have run "ompi_info --param all all 
--level 9" to get all parameters, but I don't know exactly which parameters I 
need to add to $OMPI/etc/openmpi-mca-params.conf, nor the correct syntax to 
always force "--mca btl tcp,self --mca allow_ib 0".  I have already added 
"btl_openib_allow_ib = " and it works, but for the parameter "--mca btl 
tcp,self", what would be the correct syntax in the 
$OMPI/etc/openmpi-mca-params.conf file?

Thanks!!