First, you can check the priorities of the various coll modules
with ompi_info:

$ ompi_info --all | grep \"coll_ | grep priority
                MCA coll: parameter "coll_basic_priority" (current value: "10", data source: default, level: 9 dev/all, type: int)
                MCA coll: parameter "coll_inter_priority" (current value: "40", data source: default, level: 9 dev/all, type: int)
                MCA coll: parameter "coll_libnbc_priority" (current value: "10", data source: default, level: 9 dev/all, type: int)
                MCA coll: parameter "coll_ml_priority" (current value: "0", data source: default, level: 9 dev/all, type: int)
                MCA coll: parameter "coll_self_priority" (current value: "75", data source: default, level: 9 dev/all, type: int)
                MCA coll: parameter "coll_sm_priority" (current value: "0", data source: default, level: 9 dev/all, type: int)
                MCA coll: parameter "coll_tuned_priority" (current value: "30", data source: default, level: 6 tuner/all, type: int)


coll_tuned is most likely the collective module you will be using.
Then you can check the various ompi_coll_tuned_*_intra_dec_fixed functions in
ompi/mca/coll/tuned/coll_tuned_decision_fixed.c; that is where the tuned
collective module selects an algorithm based on communicator size and
message size.
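
For example, ompi_coll_tuned_barrier_intra_dec_fixed() is where the barrier
implementation is picked based on the communicator size. If you want to
bypass the built-in decision logic and force a given algorithm, the tuned
component also has per-collective MCA parameters for that (assuming your
Open MPI build exposes them; check with ompi_info first, and note the
algorithm number below is only an illustration):

$ ompi_info --all | grep coll_tuned_barrier_algorithm
$ mpirun --mca coll_tuned_use_dynamic_rules 1 \
         --mca coll_tuned_barrier_algorithm 3 -np 4 ./helloworld

The ompi_info output should list which algorithm each number maps to.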

Cheers,

Gilles

On Sun, Oct 4, 2015 at 11:12 AM, Dahai Guo <dahaiguo2...@yahoo.com> wrote:
> Thanks, Jeff. I am trying to understand in detail how Open MPI works at
> run time. What main functions does it call to select and initialize the coll
> components? Using "helloworld" as an example, how does it select and
> initialize the MPI_Barrier algorithm? Which C functions are involved and
> used in the process?
>
> Dahai
>
>
>
> On Friday, October 2, 2015 7:50 PM, Jeff Squyres (jsquyres)
> <jsquy...@cisco.com> wrote:
>
>
> On Oct 2, 2015, at 2:21 PM, Dahai Guo <dahaiguo2...@yahoo.com> wrote:
>>
>> Is there any way to trace Open MPI internal function calls in an MPI user
>> program?
>
> Unfortunately, not easily -- other than using a debugger, for example.
>
>> If so, can anyone explain it with an example, such as helloworld? I
>> built Open MPI with the VampirTrace options and compiled the following
>> program with mpicc-vt, but I didn't get any tracing info.
>
> Open MPI is a giant state machine -- MPI_INIT, for example, invokes slightly
> fewer than a bazillion functions (e.g., it initializes every framework and
> many components/plugins).
>
> Is there something in particular that you're looking for / want to know
> about?
>
>> Thanks
>>
>> D. G.
>>
>> #include <stdio.h>
>> #include <mpi.h>
>>
>>
>> int main (int argc, char **argv)
>> {
>>  int rank, size;
>>
>>  MPI_Init (&argc, &argv);
>>  MPI_Comm_rank (MPI_COMM_WORLD, &rank);
>>  MPI_Comm_size (MPI_COMM_WORLD, &size);
>>  printf( "Hello world from process %d of %d\n", rank, size );
>>  MPI_Barrier(MPI_COMM_WORLD);
>>  MPI_Finalize();
>>  return 0;
>> }
>>
>
>
> --
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to:
> http://www.cisco.com/web/about/doing_business/legal/cri/
>
