Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all, Dear Gus, Dear George,
have you seen my example program? (in the attachment)
As you suggested, I have tried to *think recursively about the datatypes*,
but there is something wrong that I am not able to understand. What do you
think?

thanks a lot

Diego


On 16 January 2015 at 23:23, Gus Correa  wrote:

> Hi George
>
> Many thanks for your answer and interest in my questions.
> ... so ... more questions inline ...
>
> On 01/16/2015 03:41 PM, George Bosilca wrote:
>
>> Gus,
>>
>> Please see my answers inline.
>>
>>  On Jan 16, 2015, at 14:24 , Gus Correa  wrote:
>>>
>>> Hi George
>>>
>>> It is still not clear to me how to deal with strides in
>>> MPI_Create_type_subarray.
>>> The function/subroutine interface doesn’t mention strides at all.
>>>
>>
>> That’s indeed a little tricky.
>> However, the trick here is that when you try to understand the subarray
>> type you should think recursively about the datatypes involved in the
>> operation.
>>
>>  It is a pity that there is little literature (books) about MPI,
>>> and the existing books are lagging behind the new MPI developments and
>>> standards (MPI-2, MPI-3).
>>> My most reliable sources so far were the "MPI - The complete reference"
>>> books, vol-1 (2nd ed.) and vol-2 (which presumably covers MPI-2).
>>> However, they do not even mention MPI_Create_type_subarray,
>>> which is part of the MPI-2 standard.
>>>
>>
>> Let me do a wild guess: you two guys must be the firsts to ask questions
>> about it …
>>
>>
> Did anybody but the MPI developers actually *used*
> MPI_Create_type_subarray?
> Could this explain the scarcity of questions about it?  :)
>
>  I found it in the MPI-2 standard on the web, but it is not clear to me
>>> how to achieve the same effect of strides that are available in
>>> MPI_Type_vector.
>>> MPI_Create_type_subarray is in section 4.1.3.
>>> The OMPI MPI_Create_type_subarray man page says there is an example in
>>> section 9.9.2 of the MPI-2 standard.
>>> However, there is no section 9.9.2.
>>> Chapter 9 is about something else ("The info object"), not derived types.
>>> No good example of MPI_Create_type_subarray in section 4.1.3 either,
>>> which is written in the typical terse and hermetic style in which
>>> standards are.
>>>
>>
>> No comments on subjective topics … ;)
>>
>
> It flows almost as smoothly as Backus-Naur prose.
> Makes a great reading with some Mozart in the background.
>
>  You just blew my day away, I was totally under the impression that the
>> MPI standard reads like a children’s bedtime story book !!!
>>
>>
>
> Did you write it?
> Do you read it for your kids at bed time?
> Do they fall asleep right away?
>
> Oh, if AEsop, Grimm Brothers, Charles Perrault, Andersen could only have
> helped as copy-desks ...
>
>
>  So, how can one handle strides in MPI_Create_type_subarray?
>>> Would one have to first create several MPI_Type_vector for the various
>>> dimensions, then use them as "oldtype" in  MPI_Create_type_subarray?
>>> That sounds awkward, because there is only one “oldtype" in
>>> MPI_Create_type_subarray, not one for each dimension.
>>>
>>
>> Exactly. Take a look at how we handle the subarray in Open MPI,
>> more precisely at the file ompi/datatype/ompi_datatype_create_subarray.c.
>> My comment from few days ago referred exactly to this code, where the
>> subarray is basically described in terms of vector
>> (well maybe vector as I was lazy to check the LB/UB).
>>
>>
> When documentation equates to reading actual function code ...
> ... that is when users drop trying to use new developments ...
>
> BTW, ominously a bug report on LB/UB misuse in MPI_Type_struct
> *and* in  MPI_Type_create_subarray ... gosh ...
>
> http://lists.mpich.org/pipermail/discuss/2015-January/003608.html
>
> But hopefully that doesn't affect Open MPI, right?
>
>  As I said above think recursively.
>> You start with the old type,
>> then build another try on a dimension,
>> then you use this to expose the second dimensions and so on.
>> For each dimension your basic type is not the user provided old type,
>> but the type you built so far.
>>
>> - size_array[i] is the number of elements in big data in the dimension i
>> - subsize_array[i] is the of element you will include in your datatype in
>> the dimension i
>> - start_array[i] is how many elements you will skip in the dimension i
>> before you start including them in your datatype. start[i] + subside[i]
>> must be smaller or equal to size[i]
>>
>>
> OK, that starts to make more sense than the yawning bedtime story
> in the MPI-2 standard.
>
> I should peel off (that would be recursive)
> or build up (that would be non-recursive, right?) each dimension,
> one by one,
> like an onion,
> creating new subarrays of increasing dimensions,
> one by one,
> based on the subarray previously created.
> Did I get it right?
> So, should I peel off or build up the dimensions?
>
> In which regard is this any better than using MPI_Type_Vector,
> which can be set up in a single non-recursive call,
> as long as the sizes, strides, etc., are properly calculated?

Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Gus Correa

Hi George

Many thanks for your answer and interest in my questions.
... so ... more questions inline ...

On 01/16/2015 03:41 PM, George Bosilca wrote:

Gus,

Please see my answers inline.


On Jan 16, 2015, at 14:24 , Gus Correa  wrote:

Hi George

It is still not clear to me how to deal with strides in 
MPI_Create_type_subarray.
The function/subroutine interface doesn’t mention strides at all.


That’s indeed a little tricky.
However, the trick here is that when you try to understand the subarray
type you should think recursively about the datatypes involved in the operation.


It is a pity that there is little literature (books) about MPI,
and the existing books are lagging behind the new MPI developments and 
standards (MPI-2, MPI-3).
My most reliable sources so far were the "MPI - The complete reference" books, 
vol-1 (2nd ed.) and vol-2 (which presumably covers MPI-2).
However, they do not even mention MPI_Create_type_subarray,
which is part of the MPI-2 standard.


Let me do a wild guess: you two guys must be the first to ask questions about 
it …



Did anybody but the MPI developers actually *use* MPI_Create_type_subarray?
Could this explain the scarcity of questions about it?  :)


I found it in the MPI-2 standard on the web, but it is not clear to me
how to achieve the same effect of strides that are available in MPI_Type_vector.
MPI_Create_type_subarray is in section 4.1.3.
The OMPI MPI_Create_type_subarray man page says there is an example in section 
9.9.2 of the MPI-2 standard.
However, there is no section 9.9.2.
Chapter 9 is about something else ("The info object"), not derived types.
No good example of MPI_Create_type_subarray in section 4.1.3 either,
which is written in the typical terse and hermetic style in which
standards are.


No comments on subjective topics … ;)


It flows almost as smoothly as Backus-Naur prose.
Makes a great reading with some Mozart in the background.


You just blew my day away, I was totally under the impression that the
MPI standard reads like a children’s bedtime story book !!!




Did you write it?
Do you read it for your kids at bed time?
Do they fall asleep right away?

Oh, if AEsop, Grimm Brothers, Charles Perrault, Andersen could only have 
helped as copy-desks ...




So, how can one handle strides in MPI_Create_type_subarray?
Would one have to first create several MPI_Type_vector for the various dimensions, then 
use them as "oldtype" in  MPI_Create_type_subarray?
That sounds awkward, because there is only one “oldtype" in 
MPI_Create_type_subarray, not one for each dimension.


Exactly. Take a look at how we handle the subarray in Open MPI,
more precisely at the file ompi/datatype/ompi_datatype_create_subarray.c.
My comment from few days ago referred exactly to this code, where the
subarray is basically described in terms of vector
(well maybe vector as I was lazy to check the LB/UB).



When documentation equates to reading actual function code ...
... that is when users drop trying to use new developments ...

BTW, ominously a bug report on LB/UB misuse in MPI_Type_struct
*and* in  MPI_Type_create_subarray ... gosh ...

http://lists.mpich.org/pipermail/discuss/2015-January/003608.html

But hopefully that doesn't affect Open MPI, right?


As I said above, think recursively.
You start with the old type,
then build another type on one dimension,
then you use this to expose the second dimension, and so on.
For each dimension your basic type is not the user-provided old type,
but the type you built so far.

- size_array[i] is the number of elements of the big data in dimension i
- subsize_array[i] is the number of elements you will include in your datatype 
in dimension i
- start_array[i] is how many elements you will skip in dimension i before 
you start including them in your datatype. start[i] + subsize[i] must be 
smaller than or equal to size[i]



OK, that starts to make more sense than the yawning bedtime story
in the MPI-2 standard.

I should peel off (that would be recursive)
or build up (that would be non-recursive, right?) each dimension,
one by one,
like an onion,
creating new subarrays of increasing dimensions,
one by one,
based on the subarray previously created.
Did I get it right?
So, should I peel off or build up the dimensions?

In which regard is this any better than using MPI_Type_Vector,
which can be set up in a single non-recursive call,
as long as the sizes, strides, etc., are properly calculated?


Is there any simple example of how to achieve  stride effect with
MPI_Create_type_subarray in a multi-dimensional array?


Not as far as I know.
But now that people expressed interest in this topic,
maybe someone will write a blog or something about.



An example, just a simple example ...
... to help those that have to write all steps from 1 to N,
when it comes to thinking recursively ...
When it comes to recursion, I stopped at the Fibonacci numbers.

Well, even if it is on a blog ...
Nobody seems to care about books or printed matter anymore ...

Re: [OMPI users] Problem with connecting to 3 or more nodes

2015-01-16 Thread Jeff Squyres (jsquyres)
It's because Open MPI uses a tree-based ssh startup pattern.

(amusingly enough, I'm literally half way through writing up a blog entry about 
this exact same issue :-) )

That is, not only does Open MPI ssh from your mpirun-server to host1, Open MPI 
may also ssh from host1 to host2 (or host1 to host3).

In short, if you're not using a resource manager (such as Torque or SLURM), 
then you can't predict the ssh pattern, and you need 
passwordless/passphraseless ssh logins from each server to each other server.

Make sense?


> On Jan 16, 2015, at 3:29 PM, Chan, Elbert  wrote:
> 
> Hi
> 
> I'm hoping that someone will be able to help me figure out a problem with 
> connecting to multiple nodes with v1.8.4. 
> 
> Currently, I'm running into this issue:
> $ mpirun --host host1 hostname
> host1
> 
> $ mpirun --host host2,host3 hostname
> host2
> host3
> 
> Running this command on 1 or 2 nodes generates the expected result. However:
> $ mpirun --host host1,host2,host3 hostname
> Permission denied, please try again.
> Permission denied, please try again.
> Permission denied (publickey,password,keyboard-interactive).
> --
> ORTE was unable to reliably start one or more daemons.
> This usually is caused by:
> 
> * not finding the required libraries and/or binaries on
>  one or more nodes. Please check your PATH and LD_LIBRARY_PATH
>  settings, or configure OMPI with --enable-orterun-prefix-by-default
> 
> * lack of authority to execute on one or more specified nodes.
>  Please verify your allocation and authorities.
> 
> * the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
>  Please check with your sys admin to determine the correct location to use.
> 
> *  compilation of the orted with dynamic libraries when static are required
>  (e.g., on Cray). Please check your configure cmd line and consider using
>  one of the contrib/platform definitions for your system type.
> 
> * an inability to create a connection back to mpirun due to a
>  lack of common network interfaces and/or no route found between
>  them. Please check network connectivity (including firewalls
>  and network routing requirements).
> --
> 
> This is set up with passwordless logins with passphrases/ssh-agent. When I 
> run passphraseless, I get the expected result. 
> 
> What am I doing wrong? What can I look at to see where my problem could be?
> 
> Elbert
> 
> --
> 
> Elbert Chan
> Operating Systems Analyst
> College of ECC
> CSU, Chico
> 530-898-6481
> 
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post: 
> http://www.open-mpi.org/community/lists/users/2015/01/26207.php


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/



Re: [OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
Gus,

Please see my answers inline.

> On Jan 16, 2015, at 14:24 , Gus Correa  wrote:
> 
> Hi George
> 
> It is still not clear to me how to deal with strides in 
> MPI_Create_type_subarray.
> The function/subroutine interface doesn’t mention strides at all.

That’s indeed a little tricky. However, the trick here is that when you try to 
understand the subarray type you should think recursively about the datatypes 
involved in the operation.

> It is a pity that there is little literature (books) about MPI,
> and the existing books are lagging behind the new MPI developments and 
> standards (MPI-2, MPI-3).
> My most reliable sources so far were the "MPI - The complete reference" 
> books, vol-1 (2nd ed.) and vol-2 (which presumably covers MPI-2).
> However, they do not even mention MPI_Create_type_subarray,
> which is part of the MPI-2 standard.

Let me do a wild guess: you two guys must be the first to ask questions about 
it …

> I found it in the MPI-2 standard on the web, but it is not clear to me
> how to achieve the same effect of strides that are available in 
> MPI_Type_vector.
> MPI_Create_type_subarray is in section 4.1.3.
> The OMPI MPI_Create_type_subarray man page says there is an example in 
> section 9.9.2 of the MPI-2 standard.
> However, there is no section 9.9.2.
> Chapter 9 is about something else ("The info object"), not derived types.
> No good example of MPI_Create_type_subarray in section 4.1.3 either,
> which is written in the typical terse and hermetic style in which
> standards are.

No comments on subjective topics … ;) You just blew my day away, I was totally 
under the impression that the MPI standard reads like a children’s bedtime 
story book !!!

> So, how can one handle strides in MPI_Create_type_subarray?
> Would one have to first create several MPI_Type_vector for the various 
> dimensions, then use them as "oldtype" in  MPI_Create_type_subarray?
> That sounds awkward, because there is only one “oldtype" in 
> MPI_Create_type_subarray, not one for each dimension.

Exactly. Take a look at how we handle the subarray in Open MPI, more precisely 
at the file ompi/datatype/ompi_datatype_create_subarray.c. My comment from a few 
days ago referred exactly to this code, where the subarray is basically 
described in terms of vector (well, maybe vector, as I was lazy to check the 
LB/UB).

As I said above, think recursively. You start with the old type, then build 
another type on one dimension, then you use this to expose the second 
dimension, and so on. For each dimension your basic type is not the 
user-provided old type, but the type you built so far.

- size_array[i] is the number of elements of the big data in dimension i
- subsize_array[i] is the number of elements you will include in your datatype 
in dimension i
- start_array[i] is how many elements you will skip in dimension i before 
you start including them in your datatype. start[i] + subsize[i] must be 
smaller than or equal to size[i]
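
(As a concrete illustration of the size/subsize/start arguments, here is a 
minimal Fortran sketch. It is not from the original thread; the array shape, 
the block bounds, the program name, and the AA mentioned in the comment are 
made up. It describes a 10x10x10 block of a 20x30x40 array of doubles, 
starting at element (6,11,16).)

program subarray_sketch
  use mpi
  implicit none
  integer, parameter :: ndims = 3
  integer :: sizes(ndims), subsizes(ndims), starts(ndims)
  integer :: blocktype, ierr

  call MPI_INIT(ierr)

  sizes    = (/ 20, 30, 40 /)   ! full extent of the array in each dimension
  subsizes = (/ 10, 10, 10 /)   ! extent of the block to be described
  starts   = (/  5, 10, 15 /)   ! 0-based offsets: block begins at (6,11,16)

  call MPI_TYPE_CREATE_SUBARRAY(ndims, sizes, subsizes, starts, &
                                MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, &
                                blocktype, ierr)
  call MPI_TYPE_COMMIT(blocktype, ierr)

  ! blocktype can now be used as the datatype argument of MPI_SEND/MPI_RECV,
  ! with the first element of the full array as the buffer, e.g.
  !   call MPI_SEND(AA, 1, blocktype, dest, tag, MPI_COMM_WORLD, ierr)

  call MPI_TYPE_FREE(blocktype, ierr)
  call MPI_FINALIZE(ierr)
end program subarray_sketch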

> Is there any simple example of how to achieve  stride effect with
> MPI_Create_type_subarray in a multi-dimensional array?

Not as far as I know. But now that people have expressed interest in this 
topic, maybe someone will write a blog post or something about it.

> BTW, when are you gentlemen going to write an updated version of the
> “MPI - The Complete Reference"?  :)

Maybe after the release of MPI 4.0 would be a good target … A lot of new and 
exciting technologies will hopefully be going in by then, so writing a new 
book might be worth the effort.

  George.





> 
> Thank you,
> Gus Correa
> (Hijacking Diego Avesani's thread, apologies to Diego.)
> (Also, I know this question is not about Open MPI, but about MPI in general.  
> But the lack of examples may warrant asking the question here.)
> 
> 
> On 01/16/2015 01:39 AM, George Bosilca wrote:
>>  The subarray creation is an multi-dimension extension of the vector type.
> You can see it as a vector of vector of vector and so on, one vector per 
> dimension.
> The stride array is used to declare on each dimension what is the relative 
> displacement
> (in number of elements) from the beginning of the dimension array.
>> 
>> It is important to use regular type creation when you can take advantage of 
>> such
> regularity instead of resorting to use of struct or h*. This insure better
> packing/unpacking performance, as well as possible future support for 
> one-sided
> communications.
>> 
>> George.
>> 
>> 
>> 
>>> On Jan 15, 2015, at 19:31, Gus Correa  wrote:
>>> 
>>> I never used MPI_Type_create_subarray, only MPI_Type_Vector.
>>> What I like about MPI_Type_Vector is that you can define a stride,
>>> hence you can address any regular pattern in memory.
>>> However, it envisages the array layout in memory as a big 1-D array,
>>> with a linear index progressing in either Fortran or C order.
>>> 
>>> Somebody correct me please if I am wrong, but at first sight 
>>> MPI_Type_Vector sounds more flexible to me than MPI_Type_create_subarray, 
>>> exactly because the latter doesn't have strides.

[OMPI users] Problem with connecting to 3 or more nodes

2015-01-16 Thread Chan, Elbert
Hi

I'm hoping that someone will be able to help me figure out a problem with 
connecting to multiple nodes with v1.8.4. 

Currently, I'm running into this issue:
$ mpirun --host host1 hostname
host1

$ mpirun --host host2,host3 hostname
host2
host3

Running this command on 1 or 2 nodes generates the expected result. However:
$ mpirun --host host1,host2,host3 hostname
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password,keyboard-interactive).
--
ORTE was unable to reliably start one or more daemons.
This usually is caused by:

* not finding the required libraries and/or binaries on
  one or more nodes. Please check your PATH and LD_LIBRARY_PATH
  settings, or configure OMPI with --enable-orterun-prefix-by-default

* lack of authority to execute on one or more specified nodes.
  Please verify your allocation and authorities.

* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
  Please check with your sys admin to determine the correct location to use.

*  compilation of the orted with dynamic libraries when static are required
  (e.g., on Cray). Please check your configure cmd line and consider using
  one of the contrib/platform definitions for your system type.

* an inability to create a connection back to mpirun due to a
  lack of common network interfaces and/or no route found between
  them. Please check network connectivity (including firewalls
  and network routing requirements).
--

This is set up with passwordless logins with passphrases/ssh-agent. When I run 
passphraseless, I get the expected result. 

What am I doing wrong? What can I look at to see where my problem could be?

Elbert

--

Elbert Chan
Operating Systems Analyst
College of ECC
CSU, Chico
530-898-6481



[OMPI users] How to handle strides in MPI_Create_type_subarray - Re: MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Gus Correa

Hi George

It is still not clear to me how to deal with strides in 
MPI_Create_type_subarray.

The function/subroutine interface doesn't mention strides at all.

It is a pity that there is little literature (books) about MPI,
and the existing books are lagging behind the new MPI developments and 
standards (MPI-2, MPI-3).
My most reliable sources so far were the "MPI - The complete reference" 
books, vol-1 (2nd ed.) and vol-2 (which presumably covers MPI-2).

However, they do not even mention MPI_Create_type_subarray,
which is part of the MPI-2 standard.

I found it in the MPI-2 standard on the web, but it is not clear to me
how to achieve the same effect of strides that are available in 
MPI_Type_vector.

MPI_Create_type_subarray is in section 4.1.3.
The OMPI MPI_Create_type_subarray man page says there is an example in 
section 9.9.2 of the MPI-2 standard.

However, there is no section 9.9.2.
Chapter 9 is about something else ("The info object"), not derived types.
No good example of MPI_Create_type_subarray in section 4.1.3 either,
which is written in the typical terse and hermetic style in which
standards are.

So, how can one handle strides in MPI_Create_type_subarray?
Would one have to first create several MPI_Type_vector for the various 
dimensions, then use them as "oldtype" in  MPI_Create_type_subarray?
That sounds awkward, because there is only one "oldtype" in 
MPI_Create_type_subarray, not one for each dimension.


Is there any simple example of how to achieve a stride effect with 
MPI_Create_type_subarray in a multi-dimensional array?

BTW, when are you gentlemen going to write an updated version of the
"MPI - The Complete Reference"?  :)

Thank you,
Gus Correa
(Hijacking Diego Avesani's thread, apologies to Diego.)
(Also, I know this question is not about Open MPI, but about MPI in 
general.  But the lack of examples may warrant asking the question here.)



On 01/16/2015 01:39 AM, George Bosilca wrote:

  The subarray creation is a multi-dimension extension of the vector type.
You can see it as a vector of vector of vector and so on, one vector per
dimension. The stride array is used to declare on each dimension what is the
relative displacement (in number of elements) from the beginning of the
dimension array.

It is important to use regular type creation when you can take advantage of
such regularity instead of resorting to the use of struct or h*. This insures
better packing/unpacking performance, as well as possible future support for
one-sided communications.


George.




On Jan 15, 2015, at 19:31, Gus Correa  wrote:

I never used MPI_Type_create_subarray, only MPI_Type_Vector.
What I like about MPI_Type_Vector is that you can define a stride,
hence you can address any regular pattern in memory.
However, it envisages the array layout in memory as a big 1-D array,
with a linear index progressing in either Fortran or C order.

Somebody correct me please if I am wrong, but at first sight MPI_Type_Vector 
sounds more flexible to me than MPI_Type_create_subarray, exactly because the 
latter doesn't have strides.

The downside is that you need to do some index arithmetic to figure
the right strides, etc, to match the corresponding
Fortran90 array sections.

There are good examples in the "MPI - The complete reference" books I suggested 
to you before (actually in vol 1).

Online I could find the two man pages (good information, but no example):

http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_create_subarray.3.php

There is a very simple 2D example of MPI_Type_vector using strides here:

https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types

and a similar one here:

http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html
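
(In the spirit of those tutorial examples, a minimal Fortran sketch, not taken 
from the original post; the names n, m, AA, and rowtype are illustrative. A 
single row of a Fortran array AA(n,m) is a strided pattern in memory, one real 
every n reals, which MPI_TYPE_VECTOR captures directly. Run with at least two 
ranks.)

program vector_row_sketch
  use mpi
  implicit none
  integer, parameter :: n = 100, m = 5
  real :: AA(n, m)
  integer :: rowtype, ierr, rank, i

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  ! Row i of AA consists of m reals spaced n reals apart (Fortran is
  ! column-major): count m, blocklength 1, stride n.
  call MPI_TYPE_VECTOR(m, 1, n, MPI_REAL, rowtype, ierr)
  call MPI_TYPE_COMMIT(rowtype, ierr)

  i = 3
  if (rank == 0) then
     AA = 1.0
     call MPI_SEND(AA(i,1), 1, rowtype, 1, 0, MPI_COMM_WORLD, ierr)
  else if (rank == 1) then
     call MPI_RECV(AA(i,1), 1, rowtype, 0, 0, MPI_COMM_WORLD, &
                   MPI_STATUS_IGNORE, ierr)
  end if

  call MPI_TYPE_FREE(rowtype, ierr)
  call MPI_FINALIZE(ierr)
end program vector_row_sketch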

Gus Correa


On 01/15/2015 06:53 PM, Diego Avesani wrote:
dear George, dear Gus, dear all,
Could you please tell me where I can find a good example?
I am sorry but I can not understand the 3D array.


Really Thanks

Diego


On 15 January 2015 at 20:13, George Bosilca <bosi...@icl.utk.edu> wrote:



On Jan 15, 2015, at 06:02, Diego Avesani <diego.aves...@gmail.com> wrote:

Dear Gus, Dear all,
Thanks a lot.
MPI_Type_Struct works well for the first part of my problem, so I
am very happy to be able to use it.

Regarding MPI_TYPE_VECTOR.

I have studied it, and for a simple case it is clear to me what it
does (at least I believe). For example, if I have a matrix defined as:
REAL, ALLOCATABLE (AA(:,:))
ALLOCATE AA(100,5)

I could send part of it defining

CALL MPI_TYPE_VECTOR(5,1,5,MPI_DOUBLE_PRECISION,/MY_NEW_TYPE/)

after that I can send part of it with

CALL MPI_SEND( AA(1:/10/,:), /10/, /MY_NEW_TYPE/, 1, 0,
MPI_COMM_WORLD );

Have I understood correctly?

What can I do in the case of a three-dimensional array? For example
AA(:,:,:); I am looking at MPI_TYPE_CREATE_SUBARRAY.
Is that the correct way?

Thanks again


Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all,
here is the 3D example, but unfortunately it does not work.
I believe that there is some problem with the stride.

What do you think?

Thanks again to everyone

Diego


On 16 January 2015 at 19:20, Diego Avesani  wrote:

> Dear All,
> in the attachment the 2D example, Now I will try the 3D example.
>
> What do you think of it? is it correct?
> The idea is to build a 2D data_type, to sent 3D data
>
> Diego
>
>
> On 16 January 2015 at 18:19, Diego Avesani 
> wrote:
>
>> Dear George, Dear All,
>>
>> and what do you think about the previous post?
>>
>> Thanks again
>>
>> Diego
>>
>>
>> On 16 January 2015 at 18:11, George Bosilca  wrote:
>>
>>> You could but you don’t need to. The datatype engine of Open MPI is
>>> doing a fair job of packing/unpacking the data on the flight, so you don’t
>>> have to.
>>>
>>>   George.
>>>
>>> On Jan 16, 2015, at 11:32 , Diego Avesani 
>>> wrote:
>>>
>>> Dear all,
>>>
>>> Could I use  MPI_PACK?
>>>
>>>
>>> Diego
>>>
>>>
>>> On 16 January 2015 at 16:26, Diego Avesani 
>>> wrote:
>>>
 Dear George, Dear all,

 I have been studying. It's clear for 2D case QQ(:,:).

 For example if
 real :: QQ(npt,9) , with 9 the characteristic of each particles.

 I can simple:

  call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)

 I send 50 element of QQ. I am in fortran so a two dimensional array is
 organized in a 1D array and a new row start after the 9 elements of a colum

 The problem is a 3D array. I belive that I have to create a sort of *vector
 of vectors.*
 More or less like:

 call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)

  and then

 call MPI_TYPE_VECTOR(xxx, xxx, xxx, *my_row*,  my_type, ierr).

 You can note that in the second case I have  *my_row *instead of
 mpi_real.

 I found somethind about it in a tutorial but I am not able to find it
 again in google. I think that is not convinient the use of struct in this
 case, I have only real. Moreover, mpi_struct is think to emulate
 Fortran90 and C structures, as Gus' suggestion.

 Let's me look to that tutorial
 What do you think?

 Thanks again






 Diego


 On 16 January 2015 at 16:02, George Bosilca 
 wrote:

> The operation you describe is a pack operation, agglomerating together
> in a contiguous buffer originally discontinuous elements. As a result 
> there
> is no need to use the MPI_TYPE_VECTOR, but instead you can just use the
> type you created so far (MPI_my_STRUCT) with a count.
>
>   George.
>
>
> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani <
> diego.aves...@gmail.com> wrote:
>
>> Dear All,
>> I'm sorry to insist, but I am not able to understand. Moreover, I
>> have realized that I have to explain myself better.
>>
>> I try to explain in may program. Each CPU has *npt* particles. My
>> program understand how many particles each CPU has to send, according to
>> their positions. Then I can do:
>>
>> *icount=1*
>> * DO i=1,npt*
>> *IF(i is a particle to send)THEN*
>>
>> *DATASEND(icount)%ip = PART(ip)%ip*
>> *DATASEND(icount)%mc = PART(ip)%mc*
>>
>> *DATASEND(icount)%RP = PART(ip)%RP*
>> *DATASEND(icount)%QQ = PART(ip)%QQ*
>>
>> *icount=icount+1*
>> *ENDIF*
>> * ENDDO*
>>
>> After that, I can send *DATASEND*
>>
>> I *DATASEND* is a   *MPI_my_STRUCT.* I can allocate it according to
>> the number of particles that I have to send:
>>
>> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
>>
>> This means that the number of particles which I have to send can
>> change every time.
>>
>> After that, I compute for each particles, somethins called
>> QQmls(:,:,:).
>> QQmls has all real elements. Now I would like to to the same that I
>> did with PART, but in this case:
>>
>> *icount=1*
>> *DO i=1,npt*
>> *IF(i is a particle to send)THEN*
>>
>>*DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
>> *  icount=icount+1*
>>
>> *ENDIF*
>> *ENDDO*
>>
>> I would like to have a sort  *MPI_my_TYPE to do that (like *
>> *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *
>> because  *DATASEND_REAL *changes size every time.
>>
>> I hope to make myself clear.
>>
>> So is it correct to use *MPI_TYPE_VECTOR?, *Can I do what I want?
>>
>> In the meantime, I will study some examples.
>>
>> Thanks again
>>
>>
>>
>>
>>
>> Diego
>>
>>
>> On 16 January 2015 at 07:39, George Bosilca 
>> wrote:
>>
>>>  The subarray creation is an multi-dimension extension of the vector
>>> type. You can see it as 

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear All,
in the attachment is the 2D example; now I will try the 3D example.

What do you think of it? Is it correct?
The idea is to build a 2D data_type, to send 3D data

Diego


On 16 January 2015 at 18:19, Diego Avesani  wrote:

> Dear George, Dear All,
>
> and what do you think about the previous post?
>
> Thanks again
>
> Diego
>
>
> On 16 January 2015 at 18:11, George Bosilca  wrote:
>
>> You could but you don’t need to. The datatype engine of Open MPI is doing
>> a fair job of packing/unpacking the data on the flight, so you don’t have
>> to.
>>
>>   George.
>>
>> On Jan 16, 2015, at 11:32 , Diego Avesani 
>> wrote:
>>
>> Dear all,
>>
>> Could I use  MPI_PACK?
>>
>>
>> Diego
>>
>>
>> On 16 January 2015 at 16:26, Diego Avesani 
>> wrote:
>>
>>> Dear George, Dear all,
>>>
>>> I have been studying. It's clear for 2D case QQ(:,:).
>>>
>>> For example if
>>> real :: QQ(npt,9) , with 9 the characteristic of each particles.
>>>
>>> I can simple:
>>>
>>>  call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)
>>>
>>> I send 50 element of QQ. I am in fortran so a two dimensional array is
>>> organized in a 1D array and a new row start after the 9 elements of a colum
>>>
>>> The problem is a 3D array. I belive that I have to create a sort of *vector
>>> of vectors.*
>>> More or less like:
>>>
>>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)
>>>
>>>  and then
>>>
>>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, *my_row*,  my_type, ierr).
>>>
>>> You can note that in the second case I have  *my_row *instead of
>>> mpi_real.
>>>
>>> I found somethind about it in a tutorial but I am not able to find it
>>> again in google. I think that is not convinient the use of struct in this
>>> case, I have only real. Moreover, mpi_struct is think to emulate
>>> Fortran90 and C structures, as Gus' suggestion.
>>>
>>> Let's me look to that tutorial
>>> What do you think?
>>>
>>> Thanks again
>>>
>>>
>>>
>>>
>>>
>>>
>>> Diego
>>>
>>>
>>> On 16 January 2015 at 16:02, George Bosilca  wrote:
>>>
 The operation you describe is a pack operation, agglomerating together
 in a contiguous buffer originally discontinuous elements. As a result there
 is no need to use the MPI_TYPE_VECTOR, but instead you can just use the
 type you created so far (MPI_my_STRUCT) with a count.

   George.


 On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani >>> > wrote:

> Dear All,
> I'm sorry to insist, but I am not able to understand. Moreover, I have
> realized that I have to explain myself better.
>
> I try to explain in may program. Each CPU has *npt* particles. My
> program understand how many particles each CPU has to send, according to
> their positions. Then I can do:
>
> *icount=1*
> * DO i=1,npt*
> *IF(i is a particle to send)THEN*
>
> *DATASEND(icount)%ip = PART(ip)%ip*
> *DATASEND(icount)%mc = PART(ip)%mc*
>
> *DATASEND(icount)%RP = PART(ip)%RP*
> *DATASEND(icount)%QQ = PART(ip)%QQ*
>
> *icount=icount+1*
> *ENDIF*
> * ENDDO*
>
> After that, I can send *DATASEND*
>
> I *DATASEND* is a   *MPI_my_STRUCT.* I can allocate it according to
> the number of particles that I have to send:
>
> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
>
> This means that the number of particles which I have to send can
> change every time.
>
> After that, I compute for each particles, somethins called
> QQmls(:,:,:).
> QQmls has all real elements. Now I would like to to the same that I
> did with PART, but in this case:
>
> *icount=1*
> *DO i=1,npt*
> *IF(i is a particle to send)THEN*
>
>*DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
> *  icount=icount+1*
>
> *ENDIF*
> *ENDDO*
>
> I would like to have a sort  *MPI_my_TYPE to do that (like *
> *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *
> because  *DATASEND_REAL *changes size every time.
>
> I hope to make myself clear.
>
> So is it correct to use *MPI_TYPE_VECTOR?, *Can I do what I want?
>
> In the meantime, I will study some examples.
>
> Thanks again
>
>
>
>
>
> Diego
>
>
> On 16 January 2015 at 07:39, George Bosilca 
> wrote:
>
>>  The subarray creation is an multi-dimension extension of the vector
>> type. You can see it as a vector of vector of vector and so on, one 
>> vector
>> per dimension. The stride array is used to declare on each dimension what
>> is the relative displacement (in number of elements) from the beginning 
>> of
>> the dimension array.
>>
>> It is important to use regular type creation when you can take
>> advantage of such regularity instead of resorting to use of struct or h*.
>> This insure better pac

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear George, Dear All,

and what do you think about the previous post?

Thanks again

Diego


On 16 January 2015 at 18:11, George Bosilca  wrote:

> You could but you don’t need to. The datatype engine of Open MPI is doing
> a fair job of packing/unpacking the data on the flight, so you don’t have
> to.
>
>   George.
>
> On Jan 16, 2015, at 11:32 , Diego Avesani  wrote:
>
> Dear all,
>
> Could I use  MPI_PACK?
>
>
> Diego
>
>
> On 16 January 2015 at 16:26, Diego Avesani 
> wrote:
>
>> Dear George, Dear all,
>>
>> I have been studying. It's clear for 2D case QQ(:,:).
>>
>> For example if
>> real :: QQ(npt,9) , with 9 the characteristic of each particles.
>>
>> I can simple:
>>
>>  call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)
>>
>> I send 50 element of QQ. I am in fortran so a two dimensional array is
>> organized in a 1D array and a new row start after the 9 elements of a colum
>>
>> The problem is a 3D array. I belive that I have to create a sort of *vector
>> of vectors.*
>> More or less like:
>>
>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)
>>
>>  and then
>>
>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, *my_row*,  my_type, ierr).
>>
>> You can note that in the second case I have  *my_row *instead of
>> mpi_real.
>>
>> I found somethind about it in a tutorial but I am not able to find it
>> again in google. I think that is not convinient the use of struct in this
>> case, I have only real. Moreover, mpi_struct is think to emulate
>> Fortran90 and C structures, as Gus' suggestion.
>>
>> Let's me look to that tutorial
>> What do you think?
>>
>> Thanks again
>>
>>
>>
>>
>>
>>
>> Diego
>>
>>
>> On 16 January 2015 at 16:02, George Bosilca  wrote:
>>
>>> The operation you describe is a pack operation, agglomerating together
>>> in a contiguous buffer originally discontinuous elements. As a result there
>>> is no need to use the MPI_TYPE_VECTOR, but instead you can just use the
>>> type you created so far (MPI_my_STRUCT) with a count.
>>>
>>>   George.
>>>
>>>
>>> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani 
>>>  wrote:
>>>
 Dear All,
 I'm sorry to insist, but I am not able to understand. Moreover, I have
 realized that I have to explain myself better.

 I try to explain in may program. Each CPU has *npt* particles. My
 program understand how many particles each CPU has to send, according to
 their positions. Then I can do:

 *icount=1*
 * DO i=1,npt*
 *IF(i is a particle to send)THEN*

 *DATASEND(icount)%ip = PART(ip)%ip*
 *DATASEND(icount)%mc = PART(ip)%mc*

 *DATASEND(icount)%RP = PART(ip)%RP*
 *DATASEND(icount)%QQ = PART(ip)%QQ*

 *icount=icount+1*
 *ENDIF*
 * ENDDO*

 After that, I can send *DATASEND*

 I *DATASEND* is a   *MPI_my_STRUCT.* I can allocate it according to
 the number of particles that I have to send:

 TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV

 This means that the number of particles which I have to send can change
 every time.

 After that, I compute for each particles, somethins called QQmls(:,:,:).
 QQmls has all real elements. Now I would like to to the same that I did
 with PART, but in this case:

 *icount=1*
 *DO i=1,npt*
 *IF(i is a particle to send)THEN*

*DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
 *  icount=icount+1*

 *ENDIF*
 *ENDDO*

 I would like to have a sort  *MPI_my_TYPE to do that (like *
 *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *
 because  *DATASEND_REAL *changes size every time.

 I hope to make myself clear.

 So is it correct to use *MPI_TYPE_VECTOR?, *Can I do what I want?

 In the meantime, I will study some examples.

 Thanks again





 Diego


 On 16 January 2015 at 07:39, George Bosilca 
 wrote:

>  The subarray creation is an multi-dimension extension of the vector
> type. You can see it as a vector of vector of vector and so on, one vector
> per dimension. The stride array is used to declare on each dimension what
> is the relative displacement (in number of elements) from the beginning of
> the dimension array.
>
> It is important to use regular type creation when you can take
> advantage of such regularity instead of resorting to use of struct or h*.
> This insure better packing/unpacking performance, as well as possible
> future support for one-sided communications.
>
> George.
>
>
>
> > On Jan 15, 2015, at 19:31, Gus Correa  wrote:
> >
> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
> > What I like about MPI_Type_Vector is that you can define a stride,
> > hence you can address any regular pattern in memory.
> > However, it 

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
You could, but you don’t need to. The datatype engine of Open MPI is doing a 
fair job of packing/unpacking the data on the fly, so you don’t have to.

  George.

> On Jan 16, 2015, at 11:32 , Diego Avesani  wrote:
> 
> Dear all,
> 
> Could I use  MPI_PACK?
> 
> 
> Diego
> 
> 
> On 16 January 2015 at 16:26, Diego Avesani  > wrote:
> Dear George, Dear all,
> 
> I have been studying. It's clear for 2D case QQ(:,:).
> 
> For example if 
> real :: QQ(npt,9) , with 9 the characteristic of each particles.
> 
> I can simple:
> 
>  call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)
> 
> I send 50 element of QQ. I am in fortran so a two dimensional array is 
> organized in a 1D array and a new row start after the 9 elements of a colum
> 
> The problem is a 3D array. I belive that I have to create a sort of vector of 
> vectors.
> More or less like:
> 
> call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)
> 
>  and then 
> 
> call MPI_TYPE_VECTOR(xxx, xxx, xxx, my_row,  my_type, ierr).
> 
> You can note that in the second case I have  my_row instead of mpi_real.  
> 
> I found somethind about it in a tutorial but I am not able to find it again 
> in google. I think that is not convinient the use of struct in this case, I 
> have only real. Moreover, mpi_struct is think to emulate Fortran90 and C 
> structures, as Gus' suggestion.
> 
> Let's me look to that tutorial
> What do you think?
> 
> Thanks again
> 
> 
> 
> 
> 
> 
> Diego
> 
> 
> On 16 January 2015 at 16:02, George Bosilca  > wrote:
> The operation you describe is a pack operation, agglomerating together in a 
> contiguous buffer originally discontinuous elements. As a result there is no 
> need to use the MPI_TYPE_VECTOR, but instead you can just use the type you 
> created so far (MPI_my_STRUCT) with a count.
> 
>   George.
> 
> 
> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani  > wrote:
> Dear All, 
> I'm sorry to insist, but I am not able to understand. Moreover, I have 
> realized that I have to explain myself better.
> 
> I try to explain in may program. Each CPU has npt particles. My program 
> understand how many particles each CPU has to send, according to their 
> positions. Then I can do:
> 
> icount=1
>  DO i=1,npt
> IF(i is a particle to send)THEN
> 
> DATASEND(icount)%ip = PART(ip)%ip
> DATASEND(icount)%mc = PART(ip)%mc
>  
> DATASEND(icount)%RP = PART(ip)%RP
> DATASEND(icount)%QQ = PART(ip)%QQ
> 
> icount=icount+1
> ENDIF
>  ENDDO
> 
> After that, I can send DATASEND
> 
> I DATASEND is a   MPI_my_STRUCT. I can allocate it according to the number of 
> particles that I have to send:
> 
> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
> 
> This means that the number of particles which I have to send can change every 
> time.
> 
> After that, I compute for each particles, somethins called QQmls(:,:,:).
> QQmls has all real elements. Now I would like to to the same that I did with 
> PART, but in this case:
> 
> icount=1
> DO i=1,npt
> IF(i is a particle to send)THEN
>
>DATASEND_REAL(:,icount,:)=QQmls(:,i,:)
>   icount=icount+1
> 
> ENDIF
> ENDDO
> 
> I would like to have a sort  MPI_my_TYPE to do that (like   MPI_my_STRUCT) 
> and not to create every time MPI_TYPE_VECTOR because  DATASEND_REAL changes 
> size every time.
> 
> I hope to make myself clear.
> 
> So is it correct to use MPI_TYPE_VECTOR?, Can I do what I want?
> 
> In the meantime, I will study some examples.
> 
> Thanks again
> 
>  
> 
> 
> 
> Diego
> 
> 
> On 16 January 2015 at 07:39, George Bosilca  > wrote:
>  The subarray creation is an multi-dimension extension of the vector type. 
> You can see it as a vector of vector of vector and so on, one vector per 
> dimension. The stride array is used to declare on each dimension what is the 
> relative displacement (in number of elements) from the beginning of the 
> dimension array.
> 
> It is important to use regular type creation when you can take advantage of 
> such regularity instead of resorting to use of struct or h*. This insure 
> better packing/unpacking performance, as well as possible future support for 
> one-sided communications.
> 
> George.
> 
> 
> 
> > On Jan 15, 2015, at 19:31, Gus Correa  > > wrote:
> >
> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
> > What I like about MPI_Type_Vector is that you can define a stride,
> > hence you can address any regular pattern in memory.
> > However, it envisages the array layout in memory as a big 1-D array,
> > with a linear index progressing in either Fortran or C order.
> >
> > Somebody correct me please if I am wrong, but at first sight 
> > MPI_Type_Vector sounds more flexible to me than MPI_Type_create_subarray, 
> > exactly because the latter doesn't have strides.
>

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear all,

Could I use  MPI_PACK?


Diego


On 16 January 2015 at 16:26, Diego Avesani  wrote:

> Dear George, Dear all,
>
> I have been studying. It's clear for 2D case QQ(:,:).
>
> For example if
> real :: QQ(npt,9) , with 9 the characteristic of each particles.
>
> I can simple:
>
>  call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)
>
> I send 50 element of QQ. I am in fortran so a two dimensional array is
> organized in a 1D array and a new row start after the 9 elements of a colum
>
> The problem is a 3D array. I belive that I have to create a sort of *vector
> of vectors.*
> More or less like:
>
> call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)
>
>  and then
>
> call MPI_TYPE_VECTOR(xxx, xxx, xxx, *my_row*,  my_type, ierr).
>
> You can note that in the second case I have  *my_row *instead of
> mpi_real.
>
> I found somethind about it in a tutorial but I am not able to find it
> again in google. I think that is not convinient the use of struct in this
> case, I have only real. Moreover, mpi_struct is think to emulate
> Fortran90 and C structures, as Gus' suggestion.
>
> Let's me look to that tutorial
> What do you think?
>
> Thanks again
>
>
>
>
>
>
> Diego
>
>
> On 16 January 2015 at 16:02, George Bosilca  wrote:
>
>> The operation you describe is a pack operation, agglomerating together in
>> a contiguous buffer originally discontinuous elements. As a result there is
>> no need to use the MPI_TYPE_VECTOR, but instead you can just use the type
>> you created so far (MPI_my_STRUCT) with a count.
>>
>>   George.
>>
>>
>> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani 
>> wrote:
>>
>>> Dear All,
>>> I'm sorry to insist, but I am not able to understand. Moreover, I have
>>> realized that I have to explain myself better.
>>>
>>> I try to explain in may program. Each CPU has *npt* particles. My
>>> program understand how many particles each CPU has to send, according to
>>> their positions. Then I can do:
>>>
>>> *icount=1*
>>> * DO i=1,npt*
>>> *IF(i is a particle to send)THEN*
>>>
>>> *DATASEND(icount)%ip = PART(ip)%ip*
>>> *DATASEND(icount)%mc = PART(ip)%mc*
>>>
>>> *DATASEND(icount)%RP = PART(ip)%RP*
>>> *DATASEND(icount)%QQ = PART(ip)%QQ*
>>>
>>> *icount=icount+1*
>>> *ENDIF*
>>> * ENDDO*
>>>
>>> After that, I can send *DATASEND*
>>>
>>> I *DATASEND* is a   *MPI_my_STRUCT.* I can allocate it according to
>>> the number of particles that I have to send:
>>>
>>> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
>>>
>>> This means that the number of particles which I have to send can change
>>> every time.
>>>
>>> After that, I compute for each particles, somethins called QQmls(:,:,:).
>>> QQmls has all real elements. Now I would like to to the same that I did
>>> with PART, but in this case:
>>>
>>> *icount=1*
>>> *DO i=1,npt*
>>> *IF(i is a particle to send)THEN*
>>>
>>>*DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
>>> *  icount=icount+1*
>>>
>>> *ENDIF*
>>> *ENDDO*
>>>
>>> I would like to have a sort  *MPI_my_TYPE to do that (like *
>>> *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *
>>> because  *DATASEND_REAL *changes size every time.
>>>
>>> I hope to make myself clear.
>>>
>>> So is it correct to use *MPI_TYPE_VECTOR?, *Can I do what I want?
>>>
>>> In the meantime, I will study some examples.
>>>
>>> Thanks again
>>>
>>>
>>>
>>>
>>>
>>> Diego
>>>
>>>
>>> On 16 January 2015 at 07:39, George Bosilca  wrote:
>>>
  The subarray creation is an multi-dimension extension of the vector
 type. You can see it as a vector of vector of vector and so on, one vector
 per dimension. The stride array is used to declare on each dimension what
 is the relative displacement (in number of elements) from the beginning of
 the dimension array.

 It is important to use regular type creation when you can take
 advantage of such regularity instead of resorting to use of struct or h*.
 This insure better packing/unpacking performance, as well as possible
 future support for one-sided communications.

 George.



 > On Jan 15, 2015, at 19:31, Gus Correa  wrote:
 >
 > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
 > What I like about MPI_Type_Vector is that you can define a stride,
 > hence you can address any regular pattern in memory.
 > However, it envisages the array layout in memory as a big 1-D array,
 > with a linear index progressing in either Fortran or C order.
 >
 > Somebody correct me please if I am wrong, but at first sight
 MPI_Type_Vector sounds more flexible to me than MPI_Type_create_subarray,
 exactly because the latter doesn't have strides.
 >
 > The downside is that you need to do some index arithmetic to figure
 > the right strides, etc, to match the corresponding
 > Fortran90 array sections.
 >
 > There are good example

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear George, Dear all,

I have been studying. It's clear for the 2D case QQ(:,:).

For example, if
real :: QQ(npt,9), with 9 being the characteristics of each particle,

I can simply do:

 call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)

I send 50 elements of QQ. I am in Fortran, so a two-dimensional array is
organized as a 1D array and a new row starts after the 9 elements of a column.

The problem is a 3D array. I believe that I have to create a sort of *vector
of vectors.*
More or less like:

call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)

 and then

call MPI_TYPE_VECTOR(xxx, xxx, xxx, *my_row*,  my_type, ierr).

You can note that in the second case I have  *my_row *instead of mpi_real.
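
(A possible Fortran sketch of that "vector of vectors" idea, not part of the 
original message; note that the first argument of MPI_TYPE_VECTOR is a count, 
not an array section. The names and sizes below are illustrative and reuse the 
QQmls(nd,npt,nk) array from the message quoted further down: the slice 
QQmls(:,i,:) for one particle i is built as a vector of a contiguous inner 
type.)

program slice_of_3d_sketch
  use mpi
  implicit none
  integer, parameter :: nd = 3, npt = 1000, nk = 9
  real :: QQmls(nd, npt, nk)
  integer :: coltype, slicetype, ierr

  call MPI_INIT(ierr)

  ! Inner type: the nd contiguous reals QQmls(:,i,k) for one particle i, one k.
  call MPI_TYPE_CONTIGUOUS(nd, MPI_REAL, coltype, ierr)

  ! Outer type: nk such blocks, one per k. The stride of MPI_TYPE_VECTOR is
  ! counted in extents of the old type, and npt inner blocks (npt*nd reals)
  ! separate QQmls(1,i,k) from QQmls(1,i,k+1).
  call MPI_TYPE_VECTOR(nk, 1, npt, coltype, slicetype, ierr)
  call MPI_TYPE_COMMIT(slicetype, ierr)

  ! QQmls(:,i,:) for particle i could then be sent as, e.g.,
  !   call MPI_SEND(QQmls(1,i,1), 1, slicetype, dest, tag, MPI_COMM_WORLD, ierr)

  call MPI_TYPE_FREE(slicetype, ierr)
  call MPI_TYPE_FREE(coltype, ierr)
  call MPI_FINALIZE(ierr)
end program slice_of_3d_sketch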

I found something about it in a tutorial, but I am not able to find it again
on Google. I think that using a struct is not convenient in this case, since I
have only reals. Moreover, MPI_STRUCT is meant to emulate Fortran90 and C
structures, as per Gus' suggestion.

Let me look for that tutorial.
What do you think?

Thanks again






Diego


On 16 January 2015 at 16:02, George Bosilca  wrote:

> The operation you describe is a pack operation, agglomerating together in
> a contiguous buffer originally discontinuous elements. As a result there is
> no need to use the MPI_TYPE_VECTOR, but instead you can just use the type
> you created so far (MPI_my_STRUCT) with a count.
>
>   George.
>
>
> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani 
> wrote:
>
>> Dear All,
>> I'm sorry to insist, but I am not able to understand. Moreover, I have
>> realized that I have to explain myself better.
>>
>> I try to explain in may program. Each CPU has *npt* particles. My
>> program understand how many particles each CPU has to send, according to
>> their positions. Then I can do:
>>
>> *icount=1*
>> * DO i=1,npt*
>> *IF(i is a particle to send)THEN*
>>
>> *DATASEND(icount)%ip = PART(ip)%ip*
>> *DATASEND(icount)%mc = PART(ip)%mc*
>>
>> *DATASEND(icount)%RP = PART(ip)%RP*
>> *DATASEND(icount)%QQ = PART(ip)%QQ*
>>
>> *icount=icount+1*
>> *ENDIF*
>> * ENDDO*
>>
>> After that, I can send *DATASEND*
>>
>> I *DATASEND* is a   *MPI_my_STRUCT.* I can allocate it according to
>> the number of particles that I have to send:
>>
>> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
>>
>> This means that the number of particles which I have to send can change
>> every time.
>>
>> After that, I compute for each particles, somethins called QQmls(:,:,:).
>> QQmls has all real elements. Now I would like to to the same that I did
>> with PART, but in this case:
>>
>> *icount=1*
>> *DO i=1,npt*
>> *IF(i is a particle to send)THEN*
>>
>>*DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
>> *  icount=icount+1*
>>
>> *ENDIF*
>> *ENDDO*
>>
>> I would like to have a sort  *MPI_my_TYPE to do that (like *
>> *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *because
>>   *DATASEND_REAL *changes size every time.
>>
>> I hope to make myself clear.
>>
>> So is it correct to use *MPI_TYPE_VECTOR?, *Can I do what I want?
>>
>> In the meantime, I will study some examples.
>>
>> Thanks again
>>
>>
>>
>>
>>
>> Diego
>>
>>
>> On 16 January 2015 at 07:39, George Bosilca  wrote:
>>
>>>  The subarray creation is an multi-dimension extension of the vector
>>> type. You can see it as a vector of vector of vector and so on, one vector
>>> per dimension. The stride array is used to declare on each dimension what
>>> is the relative displacement (in number of elements) from the beginning of
>>> the dimension array.
>>>
>>> It is important to use regular type creation when you can take advantage
>>> of such regularity instead of resorting to use of struct or h*. This insure
>>> better packing/unpacking performance, as well as possible future support
>>> for one-sided communications.
>>>
>>> George.
>>>
>>>
>>>
>>> > On Jan 15, 2015, at 19:31, Gus Correa  wrote:
>>> >
>>> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
>>> > What I like about MPI_Type_Vector is that you can define a stride,
>>> > hence you can address any regular pattern in memory.
>>> > However, it envisages the array layout in memory as a big 1-D array,
>>> > with a linear index progressing in either Fortran or C order.
>>> >
>>> > Somebody correct me please if I am wrong, but at first sight
>>> MPI_Type_Vector sounds more flexible to me than MPI_Type_create_subarray,
>>> exactly because the latter doesn't have strides.
>>> >
>>> > The downside is that you need to do some index arithmetic to figure
>>> > the right strides, etc, to match the corresponding
>>> > Fortran90 array sections.
>>> >
>>> > There are good examples in the "MPI - The complete reference" books I
>>> suggested to you before (actually in vol 1).
>>> >
>>> > Online I could find the two man pages (good information, but no
>>> example):
>>> >
>>> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
>>> > http://www.op

Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
The operation you describe is a pack operation, agglomerating originally
discontiguous elements into a contiguous buffer. As a result there is no need
to use MPI_TYPE_VECTOR; instead you can just use the type you created so far
(MPI_my_STRUCT) with a count.

  George.
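
(A small Fortran fragment of what "with a count" might look like, not from the 
original message. It assumes MPI_my_STRUCT is the already committed handle for 
the tParticle structure from the earlier posts, that nsend, dest, tag, source, 
and ierr are declared, and that DATASEND has been filled contiguously as in the 
loop quoted below.)

  ! DATASEND(1:nsend) holds the packed particles, so the whole batch goes out
  ! as nsend instances of the committed structure type.
  call MPI_SEND(DATASEND, nsend, MPI_my_STRUCT, dest, tag, MPI_COMM_WORLD, ierr)

  ! On the receiving side, after allocating DATARECV large enough:
  !   call MPI_RECV(DATARECV, nrecv, MPI_my_STRUCT, source, tag, &
  !                 MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)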


On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani 
wrote:

> Dear All,
> I'm sorry to insist, but I am not able to understand. Moreover, I have
> realized that I have to explain myself better.
>
> I try to explain in may program. Each CPU has *npt* particles. My program
> understand how many particles each CPU has to send, according to their
> positions. Then I can do:
>
> *icount=1*
> * DO i=1,npt*
> *IF(i is a particle to send)THEN*
>
> *DATASEND(icount)%ip = PART(ip)%ip*
> *DATASEND(icount)%mc = PART(ip)%mc*
>
> *DATASEND(icount)%RP = PART(ip)%RP*
> *DATASEND(icount)%QQ = PART(ip)%QQ*
>
> *icount=icount+1*
> *ENDIF*
> * ENDDO*
>
> After that, I can send *DATASEND*
>
> I *DATASEND* is a   *MPI_my_STRUCT.* I can allocate it according to
> the number of particles that I have to send:
>
> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
>
> This means that the number of particles which I have to send can change
> every time.
>
> After that, I compute for each particles, somethins called QQmls(:,:,:).
> QQmls has all real elements. Now I would like to to the same that I did
> with PART, but in this case:
>
> *icount=1*
> *DO i=1,npt*
> *IF(i is a particle to send)THEN*
>
>*DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
> *  icount=icount+1*
>
> *ENDIF*
> *ENDDO*
>
> I would like to have a sort  *MPI_my_TYPE to do that (like *
> *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *because
>   *DATASEND_REAL *changes size every time.
>
> I hope to make myself clear.
>
> So is it correct to use *MPI_TYPE_VECTOR?, *Can I do what I want?
>
> In the meantime, I will study some examples.
>
> Thanks again
>
>
>
>
>
> Diego
>
>
> On 16 January 2015 at 07:39, George Bosilca  wrote:
>
>>  The subarray creation is an multi-dimension extension of the vector
>> type. You can see it as a vector of vector of vector and so on, one vector
>> per dimension. The stride array is used to declare on each dimension what
>> is the relative displacement (in number of elements) from the beginning of
>> the dimension array.
>>
>> It is important to use regular type creation when you can take advantage
>> of such regularity instead of resorting to use of struct or h*. This insure
>> better packing/unpacking performance, as well as possible future support
>> for one-sided communications.
>>
>> George.
>>
>>
>>
>> > On Jan 15, 2015, at 19:31, Gus Correa  wrote:
>> >
>> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
>> > What I like about MPI_Type_Vector is that you can define a stride,
>> > hence you can address any regular pattern in memory.
>> > However, it envisages the array layout in memory as a big 1-D array,
>> > with a linear index progressing in either Fortran or C order.
>> >
>> > Somebody correct me please if I am wrong, but at first sight
>> MPI_Type_Vector sounds more flexible to me than MPI_Type_create_subarray,
>> exactly because the latter doesn't have strides.
>> >
>> > The downside is that you need to do some index arithmetic to figure
>> > the right strides, etc, to match the corresponding
>> > Fortran90 array sections.
>> >
>> > There are good examples in the "MPI - The complete reference" books I
>> suggested to you before (actually in vol 1).
>> >
>> > Online I could find the two man pages (good information, but no
>> example):
>> >
>> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
>> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_create_subarray.3.php
>> >
>> > There is a very simple 2D example of MPI_Type_vector using strides here:
>> >
>> > https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types
>> >
>> > and a similar one here:
>> >
>> > http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html
>> >
>> > Gus Correa
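
A minimal C sketch along the lines of those tutorials (names and sizes are
illustrative, not taken from them): one strided column of a row-major matrix
described once with MPI_Type_vector:

#include <mpi.h>

/* Send column 'col' of a C (row-major) nrows x ncols matrix of doubles:
   count = nrows blocks, blocklength = 1 element, stride = ncols elements. */
void send_column(double *a, int nrows, int ncols, int col,
                 int dest, MPI_Comm comm)
{
    MPI_Datatype column;
    MPI_Type_vector(nrows, 1, ncols, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    MPI_Send(&a[col], 1, column, dest, 0, comm);
    MPI_Type_free(&column);
}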
>> >
>> >> On 01/15/2015 06:53 PM, Diego Avesani wrote:
>> >> dear George, dear Gus, dear all,
>> >> Could you please tell me where I can find a good example?
>> >> I am sorry but I can not understand the 3D array.
>> >>
>> >>
>> >> Really Thanks
>> >>
>> >> Diego
>> >>
>> >>
>> >> On 15 January 2015 at 20:13, George Bosilca wrote:
>> >>
>> >>
>> >>>On Jan 15, 2015, at 06:02, Diego Avesani wrote:
>> >>>
>> >>>Dear Gus, Dear all,
>> >>>Thanks a lot.
>> >>>MPI_Type_Struct works well for the first part of my problem, so I
>> >>>am very happy to be able to use it.
>> >>>
>> >>>Regarding MPI_TYPE_VECTOR.
>> >>>
>> >>>I have studied it and for a simple case it is clear to me what it
>> >>>does (at least I believe). For example, if I have a matrix defined
>> as:
>> >>>REA

Re: [OMPI users] OpenMPI 1.8.4rc3, 1.6.5 and 1.6.3: segmentation violation in mca_io_romio_dist_MPI_File_close

2015-01-16 Thread Eric Chamberland


On 01/14/2015 05:57 PM, Rob Latham wrote:



On 12/17/2014 07:04 PM, Eric Chamberland wrote:

Hi!

Here is a "poor man's fix" that works for me (the idea is not from me,
thanks to Thomas H.):

#1- char* lCwd = getcwd(0,0);
#2- chdir(lPathToFile);
#3- MPI_File_open(...,lFileNameWithoutTooLongPath,...);
#4- chdir(lCwd);
#5- ...

I think there are some limitations but it works very well for our
uses... and until a "real" fix is proposed...
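
A slightly fleshed-out sketch of that workaround (parameter names are
illustrative and error handling is kept minimal):

#include <mpi.h>
#include <stdlib.h>
#include <unistd.h>

/* Open a file through a short relative name by temporarily changing the
   working directory, then restore it afterwards. */
int open_with_chdir(MPI_Comm comm, const char *dir, const char *short_name,
                    int amode, MPI_File *fh)
{
    char *saved_cwd = getcwd(NULL, 0);     /* getcwd(0,0) allocates the buffer */
    if (saved_cwd == NULL || chdir(dir) != 0) {
        free(saved_cwd);
        return MPI_ERR_OTHER;
    }
    int rc = MPI_File_open(comm, (char *)short_name, amode,
                           MPI_INFO_NULL, fh);
    chdir(saved_cwd);                      /* restore the original cwd */
    free(saved_cwd);
    return rc;
}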


Thanks for the bug report and test cases.  I just pushed two fixes for
master that fix the problem you were seeing:

http://git.mpich.org/mpich.git/commit/ed39c901
http://git.mpich.org/mpich.git/commit/a30a4721a2

==rob



Great!  Thank you for the follow up (and both messages)!

Eric



Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread Diego Avesani
Dear All,
I'm sorry to insist, but I am not able to understand. Moreover, I have
realized that I have to explain myself better.

I will try to explain with my program. Each CPU has *npt* particles. My program
understands how many particles each CPU has to send, according to their
positions. Then I can do:

*icount=1*
* DO i=1,npt*
*IF(i is a particle to send)THEN*

*DATASEND(icount)%ip = PART(ip)%ip*
*DATASEND(icount)%mc = PART(ip)%mc*

*DATASEND(icount)%RP = PART(ip)%RP*
*DATASEND(icount)%QQ = PART(ip)%QQ*

*icount=icount+1*
*ENDIF*
* ENDDO*

After that, I can send *DATASEND*

*DATASEND* is a *MPI_my_STRUCT*. I can allocate it according to
the number of particles that I have to send:

TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV

This means that the number of particles which I have to send can change
every time.

After that, I compute, for each particle, something called QQmls(:,:,:).
QQmls has all real elements. Now I would like to do the same as I did
with PART, but in this case:

*icount=1*
*DO i=1,npt*
*IF(i is a particle to send)THEN*

   *DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
*  icount=icount+1*

*ENDIF*
*ENDDO*

I would like to have a sort of *MPI_my_TYPE* to do that (like
*MPI_my_STRUCT*) and not to create *MPI_TYPE_VECTOR* every time, because
*DATASEND_REAL* changes size every time.

I hope to make myself clear.

So, is it correct to use *MPI_TYPE_VECTOR*? Can I do what I want?

In the meantime, I will study some examples.

Thanks again





Diego


On 16 January 2015 at 07:39, George Bosilca  wrote:

>  The subarray creation is a multi-dimensional extension of the vector type.
> You can see it as a vector of vectors of vectors and so on, one vector per
> dimension. The array of starts is used to declare, for each dimension, the
> relative displacement (in number of elements) from the beginning of the
> array in that dimension.
>
> It is important to use the regular type constructors when you can take
> advantage of such regularity, instead of resorting to struct or the h*
> variants. This ensures better packing/unpacking performance, as well as
> possible future support for one-sided communications.
>
> George.
>
>
>
> > On Jan 15, 2015, at 19:31, Gus Correa  wrote:
> >
> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
> > What I like about MPI_Type_Vector is that you can define a stride,
> > hence you can address any regular pattern in memory.
> > However, it envisages the array layout in memory as a big 1-D array,
> > with a linear index progressing in either Fortran or C order.
> >
> > Somebody correct me please if I am wrong, but at first sight
> MPI_Type_Vector sounds more flexible to me than MPI_Type_create_subarray,
> exactly because the latter doesn't have strides.
> >
> > The downside is that you need to do some index arithmetic to figure
> > the right strides, etc, to match the corresponding
> > Fortran90 array sections.
> >
> > There are good examples in the "MPI - The complete reference" books I
> suggested to you before (actually in vol 1).
> >
> > Online I could find the two man pages (good information, but no example):
> >
> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_create_subarray.3.php
> >
> > There is a very simple 2D example of MPI_Type_vector using strides here:
> >
> > https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types
> >
> > and a similar one here:
> >
> > http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html
> >
> > Gus Correa
> >
> >> On 01/15/2015 06:53 PM, Diego Avesani wrote:
> >> dear George, dear Gus, dear all,
> >> Could you please tell me where I can find a good example?
> >> I am sorry but I can not understand the 3D array.
> >>
> >>
> >> Really Thanks
> >>
> >> Diego
> >>
> >>
> >> On 15 January 2015 at 20:13, George Bosilca wrote:
> >>
> >>
> >>>On Jan 15, 2015, at 06:02, Diego Avesani wrote:
> >>>
> >>>Dear Gus, Dear all,
> >>>Thanks a lot.
> >>>MPI_Type_Struct works well for the first part of my problem, so I
> >>>am very happy to be able to use it.
> >>>
> >>>Regarding MPI_TYPE_VECTOR.
> >>>
> >>>I have studied it and for a simple case it is clear to me what it
> >>>does (at least I believe). For example, if I have a matrix defined as:
> >>>REAL, ALLOCATABLE (AA(:,:))
> >>>ALLOCATE AA(100,5)
> >>>
> >>>I could send part of it defining
> >>>
> >>>CALL MPI_TYPE_VECTOR(5,1,5,MPI_DOUBLE_PRECISION,/MY_NEW_TYPE/)
> >>>
> >>>after that I can send part of it with
> >>>
> >>>CALL MPI_SEND( AA(1:/10/,:), /10/, /MY_NEW_TYPE/, 1, 0,
> >>>MPI_COMM_WORLD );
> >>>
> >>>Have I understood correctly?
> >>>
> >>>What I can do in case of three dimensional array? for example
> >>>AA(:,:,:), I am looking to MPI_TYPE_CREATE_SUBARRAY.
> >>>Is that the 

Re: [OMPI users] libevent hangs on app finalize stage

2015-01-16 Thread Leonid

Yes, it works now.

Thanks for the prompt support.

On 15.01.2015 21:50, Ralph Castain wrote:

Fixed - sorry about that!



On Jan 15, 2015, at 10:39 AM, Ralph Castain  wrote:

Ah, indeed - I found the problem. Fix coming momentarily


On Jan 15, 2015, at 10:31 AM, Ralph Castain  wrote:

Hmmm…I’m not seeing a failure. Let me try on another system.


Modifying libevent is not a viable solution :-(



On Jan 15, 2015, at 10:26 AM, Leonid  wrote:

Hi Ralph.

Of course that may indicate an issue with the custom compiler, but given that it
also fails with gcc once a delay is inserted, I still think it is an OMPI bug, since
such a delay could be caused by the operating system at that exact point.

For me, simply commenting out "base->event_gotterm = base->event_break = 0;"
seems to do the trick, but I am not completely sure whether that might cause other trouble.

I've tried to update my master branch to the latest version (including your 
fix) but now it just crashes for me on *all* benchmarks that I am trying (both 
with gcc and our compiler).

On 15.01.2015 18:57, Ralph Castain wrote:

Thought about this some more and realized that the orte progress engine wasn’t 
using the opal_progress_thread support functions, which include a “break” event 
to kick us out of just such problems. So I changed it on the master. From your 
citing of libevent 2.0.22, I believe that must be where you are working, yes?

If so, give the changed version a try and see if your problem is resolved.



On Jan 15, 2015, at 12:55 AM, Ralph Castain  wrote:

Given that you could only reproduce it with either your custom compiler or by 
forcibly introducing a delay, is this indicating an issue with the custom 
compiler? It does seem strange that we don't see this anywhere else, given the 
number of times that code gets run.

Only alternative solution I can think of would be to push the finalize request 
into the event loop, and thus execute the loopbreak from within an event. You 
might try and see if that solves the problem.
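
A rough sketch of that alternative (assuming the event base was created with
threading support, e.g. via evthread_use_pthreads(), so adding an event from
another thread wakes the loop; names are illustrative):

#include <event2/event.h>

/* Break the loop from inside the loop itself: schedule a one-shot event
   whose callback calls event_base_loopbreak(), so the break flag is set
   while the loop is guaranteed to be running. */
static void break_cb(evutil_socket_t fd, short what, void *arg)
{
    event_base_loopbreak((struct event_base *)arg);
}

static void request_loop_shutdown(struct event_base *base)
{
    struct timeval now = {0, 0};
    /* Fires as soon as the loop processes pending events. */
    event_base_once(base, -1, EV_TIMEOUT, break_cb, base, &now);
}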



On Jan 14, 2015, at 11:54 PM, Leonid  wrote:

Hi all.

I believe there is a bug in the event_base_loop() function in event.c
(opal/mca/event/libevent2022/libevent/).

Consider the case when the application is about to be finalized and
event_base_loop() and event_base_loopbreak() are called at the same time from
parallel threads.

Then, if event_base_loopbreak() happens to acquire the lock first, it will set
"event_base->event_break = 1" but won't send any signal to the event loop,
because the loop has not started yet.

After that, event_base_loop() will acquire the lock and clear the event_break
flag with the statement "base->event_gotterm = base->event_break = 0;". Then it
will go into polling with timeout = -1 and thus block forever.

This issue was reproduced with a custom compiler (using the Lulesh benchmark on a
4-core x86 PC), but I can also reproduce it with the GCC compiler (on almost
any benchmark, same hardware) by putting a delay into the
orte_progress_thread_engine() function:

static void* orte_progress_thread_engine(opal_object_t *obj)
{
    while (orte_event_base_active) {
        /* added sleep: gives orte_ess_base_app_finalize() a window to set
           orte_event_base_active to false, which exposes the race */
        usleep(1000);
        opal_event_loop(orte_event_base, OPAL_EVLOOP_ONCE);
    }
    return OPAL_THREAD_CANCELLED;
}

I am not completely sure what the best fix for the described problem would be.







Re: [OMPI users] MPI_type_create_struct + MPI_Type_vector + MPI_Type_contiguous

2015-01-16 Thread George Bosilca
 The subarray creation is a multi-dimensional extension of the vector type. You
can see it as a vector of vectors of vectors and so on, one vector per dimension.
The array of starts is used to declare, for each dimension, the relative
displacement (in number of elements) from the beginning of the array in that
dimension.

It is important to use the regular type constructors when you can take advantage
of such regularity, instead of resorting to struct or the h* variants. This
ensures better packing/unpacking performance, as well as possible future support
for one-sided communications.
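
As a small C sketch of that (the sizes below are illustrative assumptions, not
taken from any code in this thread):

#include <mpi.h>

/* Describe the interior block of a 3-D row-major array of doubles with
   MPI_Type_create_subarray: one "vector" per dimension, with the starts
   array giving the per-dimension displacement from the array origin.
   (Use MPI_ORDER_FORTRAN instead for column-major Fortran arrays.) */
void make_interior_block(MPI_Datatype *block)
{
    int sizes[3]    = {10, 20, 30};   /* full extent of each dimension   */
    int subsizes[3] = { 8, 18, 28};   /* extent of the piece to describe */
    int starts[3]   = { 1,  1,  1};   /* 0-based offset of that piece    */

    MPI_Type_create_subarray(3, sizes, subsizes, starts,
                             MPI_ORDER_C, MPI_DOUBLE, block);
    MPI_Type_commit(block);
    /* The committed type is then used with count 1, e.g.
       MPI_Send(buf, 1, *block, dest, tag, comm);  (buf illustrative)    */
}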

George.



> On Jan 15, 2015, at 19:31, Gus Correa  wrote:
> 
> I never used MPI_Type_create_subarray, only MPI_Type_Vector.
> What I like about MPI_Type_Vector is that you can define a stride,
> hence you can address any regular pattern in memory.
> However, it envisages the array layout in memory as a big 1-D array,
> with a linear index progressing in either Fortran or C order.
> 
> Somebody correct me please if I am wrong, but at first sight MPI_Type_Vector 
> sounds more flexible to me than MPI_Type_create_subarray, exactly because the 
> latter doesn't have strides.
> 
> The downside is that you need to do some index arithmetic to figure
> the right strides, etc, to match the corresponding
> Fortran90 array sections.
> 
> There are good examples in the "MPI - The complete reference" books I 
> suggested to you before (actually in vol 1).
> 
> Online I could find the two man pages (good information, but no example):
> 
> http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
> http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_create_subarray.3.php
> 
> There is a very simple 2D example of MPI_Type_vector using strides here:
> 
> https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types
> 
> and a similar one here:
> 
> http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html
> 
> Gus Correa
> 
>> On 01/15/2015 06:53 PM, Diego Avesani wrote:
>> dear George, dear Gus, dear all,
>> Could you please tell me where I can find a good example?
>> I am sorry but I can not understand the 3D array.
>> 
>> 
>> Really Thanks
>> 
>> Diego
>> 
>> 
>> On 15 January 2015 at 20:13, George Bosilca wrote:
>> 
>> 
>>>On Jan 15, 2015, at 06:02, Diego Avesani wrote:
>>> 
>>>Dear Gus, Dear all,
>>>Thanks a lot.
>>>MPI_Type_Struct works well for the first part of my problem, so I
>>>am very happy to be able to use it.
>>> 
>>>Regarding MPI_TYPE_VECTOR.
>>> 
>>>I have studied it and for a simple case it is clear to me what it
>>>does (at least I believe). For example, if I have a matrix defined as:
>>>REAL, ALLOCATABLE (AA(:,:))
>>>ALLOCATE AA(100,5)
>>> 
>>>I could send part of it defining
>>> 
>>>CALL MPI_TYPE_VECTOR(5,1,5,MPI_DOUBLE_PRECISION,/MY_NEW_TYPE/)
>>> 
>>>after that I can send part of it with
>>> 
>>>CALL MPI_SEND( AA(1:/10/,:), /10/, /MY_NEW_TYPE/, 1, 0,
>>>MPI_COMM_WORLD );
>>> 
>>>Have I understood correctly?
>>> 
>>>What I can do in case of three dimensional array? for example
>>>AA(:,:,:), I am looking to MPI_TYPE_CREATE_SUBARRAY.
>>>Is that the correct way?
>>> 
>>>Thanks again
>> 
>>Indeed, using the subarray is the right approach independent of the
>>number of dimensions of the data (you can use it instead of
>>MPI_TYPE_VECTOR as well).
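
For the concrete AA(100,5) case above, a hedged C sketch of such a subarray
(array name and sizes taken from the question; MPI_REAL is assumed to match a
default Fortran REAL array):

#include <mpi.h>

/* A subarray type selecting AA(1:10,:) of a Fortran REAL AA(100,5)
   (column-major storage, hence MPI_ORDER_FORTRAN). */
void make_first_10_rows(MPI_Datatype *rows10)
{
    int sizes[2]    = {100, 5};
    int subsizes[2] = { 10, 5};
    int starts[2]   = {  0, 0};

    MPI_Type_create_subarray(2, sizes, subsizes, starts,
                             MPI_ORDER_FORTRAN, MPI_REAL, rows10);
    MPI_Type_commit(rows10);
    /* then one MPI_Send(AA, 1, rows10, dest, tag, comm) ships AA(1:10,:) */
}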
>> 
>>   George.
>> 
>> 
>>> 
>>> 
>>> 
>>> 
>>>Diego
>>> 
>>> 
>>>On 13 January 2015 at 19:04, Gus Correa wrote:
>>> 
>>>Hi Diego
>>>I guess MPI_Type_Vector is the natural way to send and receive
>>>Fortran90 array sections (e.g. your QQMLS(:,50:100,:)).
>>>I used that before and it works just fine.
>>>I think that is pretty standard MPI programming style.
>>>I guess MPI_Type_Struct tries to emulate Fortran90 and C
>>>structures
>>>(as you did in your previous code, with all the surprises
>>>regarding alignment, etc), not array sections.
>>>Also, MPI type vector should be more easy going (and probably
>>>more efficient) than MPI type struct, with less memory
>>>alignment problems.
>>>I hope this helps,
>>>Gus Correa
>>> 
>>>PS - These books have a quite complete description and several
>>>examples
>>>of all MPI objects and functions, including MPI types (native
>>>and user defined):
>>>http://mitpress.mit.edu/books/mpi-complete-reference-0
>>>http://mitpress.mit.edu/books/mpi-complete-reference-1
>>> 
>>>[They cover MPI 1 and 2. I guess there is a new/upcoming book
>>>with MPI 3, but for what you're doing 1 and 2 are more than
>>>enough.]
>>> 
>>> 
>>>On 01/13