I did some code-digging and I found the answer.
If the MCA parameter btl_openib_receive_queues is not specified on the mpirun
command line and not set in
MPI_HOME/share/openmpi/mca-btl-openib-device-params.ini (via the receive_queues
parameter), then OpenMPI derives the default setting from
I am inquiring about how btl_openib_receive_queues actually gets its default
setting, since what I am seeing does not jibe with the documentation. We are
using OpenMPI 1.6.5, but I gather the version is moot.
Below is from ompi_info:
$ ompi_info --all | grep btl_openib_receive
MCA
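For reference, here is one way to inspect the parameter's current value and
data source, and to override it explicitly on the command line (the queue
specification, process count, and a.out below are illustrative placeholders,
not necessarily the real default for your HCA):

$ ompi_info --param btl openib | grep receive_queues
$ mpirun -np 16 --mca btl_openib_receive_queues \
    "P,128,256,192,128:S,2048,256,128,32" ./a.out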
Pardon if this has been addressed already, but I could not find the answer
after going through the OpenMPI FAQ and doing Google searches of the
open-mpi.org site.
We are in the process of analyzing and troubleshooting MPI jobs of increasingly
large scale (OpenMPI 1.6.5). At a sufficiently
I stumbled upon something while using 'ps -eFL' to view threads of processes,
and Google searches have failed to answer my question. This question holds for
OpenMPI 1.6.x and even OpenMPI 1.4.x.
For a program which is pure MPI (built and run using OpenMPI) and does not
implement Pthreads or
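For reference, the sort of check I am doing looks like this (a.out and <pid>
are placeholders; the NLWP column gives the per-process thread count):

$ ps -eFL | grep a.out
$ ps -o pid,nlwp,comm -p <pid>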
As far as I know, no MPI-IO is done in their LAM/MPI-based apps
-Original Message-
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Rob Latham
Sent: Friday, February 27, 2015 11:22 AM
To: us...@open-mpi.org
Subject: Re: [OMPI users] LAM/MPI -> OpenMPI
On 02/27/2015 09:40
Unfortunately, we have a few apps which use LAM/MPI instead of OpenMPI (and
this is something I have NO control over). I have been making an effort to
convince those who handle such apps to move over to OpenMPI, since LAM/MPI is
(as I understand it) no longer supported and end-of-life. In
On RHEL 5 hosts which have OFED 1.5 installed, we have builds of OpenMPI 1.4.5
and 1.6.5 in place.
On RHEL 6 hosts we have OFED 2.4 installed. Will we need to rebuild OpenMPI
1.4.5 and 1.6.5, or will our existing builds still work on the RHEL 6 hosts?
--john
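One sanity check to suggest (the install path here is illustrative of a
typical layout) is to see which verbs/RDMA libraries an existing openib build
resolves against on a RHEL 6 host:

$ ldd $MPI_HOME/lib/openmpi/mca_btl_openib.so | grep -E 'ibverbs|rdmacm'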
Re: [OMPI users] Determine IB transport type of OpenMPI job
Open MPI's openib BTL only supports RC transport.
Best,
Josh
Sent from my iPhone
On Jan 9, 2015, at 9:03 AM, "Sasso, John (GE Power & Water, Non-GE)"
<john1.sa...@ge.com> wrote:
For a multi-node job using OpenMPI 1.6.5 over InfiniBand where the OFED library
is used, is there a way to tell what IB transport type is being used (RC, UC,
UD, etc)?
--john
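As I understand the receive_queues syntax, one indirect way to confirm the
transport from the tool side is to look at the queue-pair specification: P
and S entries denote RC queue pairs, while an X entry would denote XRC:

$ ompi_info --all | grep btl_openib_receive_queues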
> You might want to update to 1.6.5, if you can - I'll see what I can
> find
>
> On Jun 6, 2014, at 12:07 PM, Sasso, John (GE Power & Water, Non-GE)
> <john1.sa...@ge.com> wrote:
>
>> Version 1.6 (i.e. prior to 1.6.1)
>>
>> -Ori
node0001 and the last 8 being node0002),
>>>>> it appears that 24 MPI tasks try to start on node0001 instead of getting
>>>>> distributed as 16 on node0001 and 8 on node0002. Hence, I am
>>>>> curious what is being passed by PBS.
>>>>>
we will honor that directive
and keep all processes within that envelope.
>
> My two cents,
> Gus Correa
>
>
>> On Jun 6, 2014, at 10:01 AM, Reuti <re...@staff.uni-marburg.de> wrote:
>>
>>> Am 06.06.2014 u
the tasks to the nodes.
I suppose we could do more integration in that regard, but haven't really seen
a reason to do so - the OMPI mappers are generally more flexible than anything
in the schedulers.
On Jun 6, 2014, at 9:08 AM, Sasso, John (GE Power & Water, Non-GE)
<john1.sa...
For the PBS scheduler and using a build of OpenMPI 1.6 built against PBS
include files + libs, is there a way to determine (perhaps via some debugging
flags passed to mpirun) what job placement parameters are passed from the PBS
scheduler to OpenMPI? In particular, I am talking about task
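As a first step, one can at least dump what PBS hands out and how OpenMPI
maps it (the executable and process count are placeholders):

$ cat $PBS_NODEFILE
$ mpirun -np 24 -display-allocation -display-map ./a.out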
even IB.
On Apr 23, 2014, at 11:10 AM, "Sasso, John (GE Power & Water, Non-GE)"
<john1.sa...@ge.com> wrote:
> I am running IMB (Intel MPI Benchmarks), the MPI-1 benchmarks, which was
> built with Intel 12.1 compiler suite and OpenMPI 1.6.5 (and running w/ OMPI
>
I am running IMB (Intel MPI Benchmarks), the MPI-1 benchmarks, which was built
with Intel 12.1 compiler suite and OpenMPI 1.6.5 (and running w/ OMPI 1.6.5).
I decided to use the following MCA parameters:
--mca btl openib,tcp,self --mca btl_openib_receive_queues
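For completeness, an illustrative form of the full command line (the
receive_queues specification shown is only an example, not necessarily the
value we actually ran with):

$ mpirun -np 64 --mca btl openib,tcp,self \
    --mca btl_openib_receive_queues "P,128,256,192,128:S,2048,256,128,32" \
    ./IMB-MPI1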
Dan,
On the hosts where the ADIOI lock error occurs, are there any NFS errors in
/var/log/messages, dmesg, or similar that refer to lockd?
--john
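For example (standard RHEL log locations; adjust for your syslog setup):

$ grep -i lockd /var/log/messages
$ dmesg | grep -iE 'lockd|nfs'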
-Original Message-
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Daniel Milroy
Sent: Tuesday, April 15, 2014 10:55 AM
To:
more detailed mapping info with --display-devel-map
Sent from my iPhone
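For example (process count and binary are placeholders):

$ mpirun -np 16 --display-devel-map ./a.out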
> On Mar 27, 2014, at 2:58 PM, "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
> wrote:
>
>> On Mar 27, 2014, at 4:06 PM, "Sasso, John (GE Power & Water, Non-GE)"
>> <jo
>>> and maybe also:
>>>
>>> -bycore, -bysocket, -bind-to-core, -bind-to-socket, ...
>>>
>>> and similar, if you want more control on where your MPI processes run.
>>>
>>> "man mpiexec" is your friend!
>>>
>>
>> -report-bindings (this one should report what you want)
>>
>> and maybe also:
>>
>> -bycore, -bysocket, -bind-to-core, -bind-to-socket, ...
>>
>> and similar, if you want more control on where your MPI processes run.
>>
>> "ma
-bycore, -bysocket, -bind-to-core, -bind-to-socket, ...
and similar, if you want more control on where your MPI processes run.
"man mpiexec" is your friend!
I hope this helps,
Gus Correa
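For instance, a run that binds one process per core and reports the resulting
bindings might look like this (process count and binary are placeholders):

$ mpirun -np 16 -bycore -bind-to-core -report-bindings ./a.out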
On 03/27/2014 01:53 PM, Sasso, John (GE Power & Water, Non-GE) wrote:
> When a piece of software built against OpenMPI fails
When a piece of software built against OpenMPI fails, I will see an error
referring to the rank of the MPI task which incurred the failure. For example:
MPI_ABORT was invoked on rank 1236 in communicator MPI_COMM_WORLD
with errorcode 1.
Unfortunately, I do not have access to the software code,
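One way to recover the rank-to-host mapping without access to the code (both
options exist in the 1.6 series, as far as I know) is to have mpirun print the
process map and tag each line of output with the rank that produced it:

$ mpirun -np <nprocs> -display-map --tag-output ./a.out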