Block length is 1; sorry for the typo in my earlier mail. Here is the code that
fails on NFS and PanFS.
block_length[0] = 1;
block_length[1] = 1;
block_length[2] = 1;
displacement[0] = 0;
displacement[1] = d[i].start * offset[i] * elmt_size;
displacement[2] = (MPI_Aint)elmt_size * max_xtent[i];
old_types[0] = MPI_LB;
old_types[1] = outer_type;
old_types[2] = MPI_UB;
#ifdef H5_HAVE_MPI2
    mpi_code = MPI_Type_create_resized(outer_type,      /* old type */
                                       displacement[0], /* lower bound */
                                       displacement[2], /* extent */
                                       &inner_type);    /* new type */
#else
    mpi_code = MPI_Type_struct(3,              /* count */
                               block_length,   /* blocklengths */
                               displacement,   /* displacements */
                               old_types,      /* old types */
                               &inner_type);   /* new type */
#endif
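
For reference, below is a minimal, self-contained sketch of how I understand the
same typemap can be built with MPI-2 calls only, i.e. without MPI_LB/MPI_UB: the
displacement of outer_type is kept with MPI_Type_create_hindexed (a one-entry
MPI_Type_create_struct would work just as well), and the lower bound / extent
that the MPI_LB/MPI_UB markers used to provide are set with
MPI_Type_create_resized. The outer_type, disp and new_extent values here are
placeholders, not the H5Smpio.c ones.

/* Sketch only: placeholder sizes, not the HDF5 code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Datatype outer_type, placed_type, inner_type;
    MPI_Aint     lb, extent;
    int          blocklen   = 1;
    MPI_Aint     disp       = 40;   /* stands in for d[i].start * offset[i] * elmt_size */
    MPI_Aint     new_extent = 400;  /* stands in for elmt_size * max_xtent[i]           */

    MPI_Init(&argc, &argv);

    /* any committed type works for the demonstration */
    MPI_Type_contiguous(10, MPI_INT, &outer_type);
    MPI_Type_commit(&outer_type);

    /* one block of outer_type at byte offset disp
       (what the middle entry of the old 3-element struct did) */
    MPI_Type_create_hindexed(1, &blocklen, &disp, outer_type, &placed_type);

    /* lower bound 0 and extent new_extent
       (what the MPI_LB and MPI_UB markers did) */
    MPI_Type_create_resized(placed_type, 0, new_extent, &inner_type);
    MPI_Type_commit(&inner_type);

    MPI_Type_get_extent(inner_type, &lb, &extent);
    printf("lb = %lld, extent = %lld\n", (long long)lb, (long long)extent);

    MPI_Type_free(&inner_type);
    MPI_Type_free(&placed_type);
    MPI_Type_free(&outer_type);
    MPI_Finalize();
    return 0;
}

Note that the #ifdef H5_HAVE_MPI2 branch above resizes outer_type directly, so
the displacement[1] offset from the struct version is not applied; whether that
matters depends on whether displacement[1] can be non-zero here.
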
On Fri, Jul 30, 2010 at 11:12 AM, Saurabh Ranjan <[email protected]> wrote:
> Even on Panasas I am getting same error with HP-MPI2.3.1.
>
>
> On Fri, Jul 30, 2010 at 11:05 AM, Saurabh Ranjan <[email protected]> wrote:
>
>> Thanks for the info. When I replace my code to use MPI_Type_create_resized,
>> I am getting an error on NFS, maybe because of the ROMIO issue mentioned in
>> the reply. But this is what I am doing:
>>
>>
>> block_length[0] = 0;
>> block_length[1] = 0;
>> block_length[2] = 0;
>>
>> displacement[0] = 0;
>> displacement[1] = d[i].start * offset[i] * elmt_size;
>> displacement[2] = (MPI_Aint)elmt_size * max_xtent[i];
>>
>> old_types[0] = MPI_LB;
>> old_types[1] = outer_type;
>> old_types[2] = MPI_UB;
>>
>> #ifdef H5_HAVE_MPI2
>> mpi_code = MPI_Type_create_resized(outer_type, /* old types */
>>                                    displacement[0], /* lower bound */
>> displacement[2], /* extent */
>> &inner_type); /* new type */
>> #else
>> mpi_code = MPI_Type_struct ( 3, /* count */
>> block_length, /* blocklengths */
>> displacement, /* displacements */
>> old_types, /* old types */
>> &inner_type); /* new type */
>> #endif
>>
>> So when I turn on HAVE_MPI2, I get "Error: Unsupported datatype passed to
>> ADIOI_Count_contiguous_blocks" while running. When I compile without this
>> flag, it works fine.
>>
>> Is my usage of MPI_Type_create_resized correct in the above case?
>>
>> Thanks
>> Saurabh
>>
>> On Thu, May 13, 2010 at 8:22 PM, Rob Latham <[email protected]> wrote:
>>
>>> On Thu, May 13, 2010 at 12:04:58AM -0700, sranjan wrote:
>>> > Now while writing parallel collective IO with 2 nodes, it fails and the
>>> > stack is
>>> > MPI_Type_struct in H5S_mpio_hyper_type (H5Smpio.c) <-
>>> > H5S_mpio_space_type (H5Smpio.c) <-
>>> > H5D_inter_collective_io (H5Dmpio.c) <-
>>> > H5D_contig_collective_write (H5Dmpio.c)
>>> >
>>> > Seems the failure is due to MPI_LB & MPI_UB (defined also in wrapper but
>>> > runtime call to this datatype constant in user selected mpi lib). MPI-2
>>> > guidelines say that "these are deprecated and their use is awkward & error
>>> > prone".
>>> >
>>> > And I am having a real hard time to figure out how to replace MPI_LB with
>>> > something appropriate.
>>>
>>> You can use MPI_Type_create_resized, but unfortunately a lot of
>>> ROMIO-based MPI-IO implementations won't understand that type and will
>>> give an error. That's slowly changing, but it takes a while for ROMIO
>>> changes to propagate everywhere. Implementations based on MPICH2-1.0.8 and
>>> newer will understand that type.
>>>
>>> ==rob
>>>
>>> --
>>> Rob Latham
>>> Mathematics and Computer Science Division
>>> Argonne National Lab, IL USA
>>>