Yes! Indeed, the second option is the efficient way to store my 2D slice.
So, following option 2:

1- I’ve grouped the processors with coord(3).EQ.1 from original_group:

        CALL MPI_COMM_GROUP(MPI_COMM_WORLD, original_group, code)
        CALL MPI_GROUP_INCL(original_group, nb_process_2D_SLICE, &
                            processes_2D_SLICE, my_group_2D, code)
        
2- I’ve created an MPI communicator for this group:

        CALL MPI_COMM_CREATE(MPI_COMM_WORLD, my_group_2D, MPI_COMM_2D_SLICE, code)
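
Putting the two steps together, this is a minimal sketch of what I have (assuming processes_2D_SLICE(1:nb_process_2D_SLICE) already holds the MPI_COMM_WORLD ranks with coord(3) .EQ. 1):

        ! Build a communicator containing only the ranks of the 2D slice
        INTEGER :: original_group, my_group_2D, MPI_COMM_2D_SLICE, code

        ! Group of all ranks in MPI_COMM_WORLD
        CALL MPI_COMM_GROUP(MPI_COMM_WORLD, original_group, code)
        ! Subgroup restricted to the ranks listed in processes_2D_SLICE
        CALL MPI_GROUP_INCL(original_group, nb_process_2D_SLICE, &
                            processes_2D_SLICE, my_group_2D, code)
        ! Communicator for that subgroup (MPI_COMM_NULL on the other ranks)
        CALL MPI_COMM_CREATE(MPI_COMM_WORLD, my_group_2D, MPI_COMM_2D_SLICE, code)
        ! The groups are no longer needed once the communicator exists
        CALL MPI_GROUP_FREE(my_group_2D, code)
        CALL MPI_GROUP_FREE(original_group, code)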

—Problem—

When I do:
        CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, rank_2D, code)
I get a segmentation fault… Did you run into this problem?
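
Could it be that MPI_COMM_CREATE returns MPI_COMM_NULL on the ranks that are not in the group? In that case I suppose the rank query needs a guard, something like this sketch:

        ! Only ranks that belong to the new communicator query their rank;
        ! the other ranks received MPI_COMM_NULL from MPI_COMM_CREATE.
        IF (MPI_COMM_2D_SLICE /= MPI_COMM_NULL) THEN
           CALL MPI_COMM_RANK(MPI_COMM_2D_SLICE, rank_2D, code)
        END IF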

—After solving this problem—

3- You say that, instead of MPI_COMM_WORLD, I should use MPI_COMM_2D_SLICE here:

                comm = MPI_COMM_2D_SLICE
                CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)
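
So only the slice ranks would enter the HDF5 calls, along these lines (a sketch of what I understood; filename, file_id and the use of MPI_INFO_NULL are placeholders):

        ! Only the ranks of the 2D-slice communicator take part in the
        ! parallel file access, so MPI_COMM_2D_SLICE goes into the fapl.
        IF (MPI_COMM_2D_SLICE /= MPI_COMM_NULL) THEN
           CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
           CALL h5pset_fapl_mpio_f(plist_id, MPI_COMM_2D_SLICE, MPI_INFO_NULL, error)
           CALL h5fcreate_f(filename, H5F_ACC_TRUNC_F, file_id, error, &
                            access_prp = plist_id)
           CALL h5pclose_f(plist_id, error)
        END IF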

Thanks a lot!
It was very helpful.
Cheers,
Maria

On 10 Mar 2015, at 14:38, Angel de Vicente <[email protected]> wrote:

> Hi Maria,
> 
> Maria Castela <[email protected]> writes:
> 
>> Dear all,
>> When I use
>> CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
>> CALL h5pset_fapl_mpio_f(plist_id, comm, info, error)
>> I assume that all processors are called to write the solution.
>> However, I just want the processors whose coords(3) equals a certain
>> value to write the solution… (constant z plane)
>> The figure below shows an example of what I want and what the program is doing.
>> How do I impose this condition? I have already tried 
>> « IF (coord(3) == 1) THEN
>> CALL h5pcreate_f(H5P_FILE_ACCESS_F, plist_id, error)
>> etc…
>> ENDIF »
>> However, it doesn’t like it.
> 
> We do need this in our code as well, and we have two different ways of
> doing it:
> 
> 1) if all processors are going to take part in the I/O operation, then
>   when you define the amount of data each processor is contributing,
>   this will be 0 for all processors except those with coord(3) .EQ. 1;
>   then all processors do exactly the same collective operations, except
>   that the amount of data they read/write will be different.
> 
> 2) you create a MPI communicator that groups those processors where
>   coord(3) .EQ. 1. Then, only those processors will contribute to the
>   I/O operation, which is similar to what your post seems to imply you
>   want to do, but then you have to make sure that the communicator in
>   your calls is not the global one (MPI_COMM_WORLD), but rather the one
>   you created specifically for coord(3) .EQ. 1.
> 
> If this doesn't lead you very far, I can give you further details. 
> 
> Cheers,
> -- 
> Ángel de Vicente
> http://www.iac.es/galeria/angelv/          

