Re: [OMPI users] mpif90 problem.

2006-03-06 Thread Benoit Semelin



Second topic:
I am using 3 processors.
I am calling a series of MPI_SCATTER calls, which work when I send
messages of 5 ko to the other processors, fail at the second scatter
if I send messages of ~10 ko, and fail at the first scatter for
bigger messages.

The message is:



What is "ko" -- did you mean "kb"?
 



I meant kilobytes (not kilobits). Sorry for that. It comes from
"kilo-octet" in French, where "octet" = byte.



2 processes killed (possibly by Open MPI)
   



 

Could this be a problem of maximum allowed message size? Or of
buffering space?



No, Open MPI should allow scattering of arbitrarily sized messages.
Can you verify that your arguments to MPI_SCATTER are correct, such
as the buffer length, the receive sizes on the clients, etc.?
 



Actually, this part of the code works fine with another MPI
implementation for much larger messages... If it helps, here are the
relevant parts of the code.


INTEGER, PARAMETER :: nb_proc=4, master=0
INTEGER, PARAMETER :: message_size=1000
INTEGER, parameter :: part_array_size=message_size*nb_proc

TYPE :: PART
 integer :: p_type
 real(KIND=8), dimension(3) :: POS
 real(KIND=8), dimension(3) :: VEL
 real(KIND=8) :: u
 real(KIND=8) :: star_age
 real(KIND=8) :: mass
 real(KIND=8) :: frac_mass1
 real(KIND=8) :: h
 real(KIND=8) :: dens
END TYPE PART

TYPE(PART), dimension(part_array_size) :: part_array

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! Declaration of the MPI type for PART !
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

call MPI_TYPE_EXTENT(MPI_INTEGER,mpi_integer_length,mpi_err)
array_of_block_length(1:2) = (/1,12/)
array_of_types(1:2) = (/MPI_INTEGER,MPI_DOUBLE_PRECISION/)
array_of_displacement(1) = 0
array_of_displacement(2) = MPI_integer_length
call MPI_TYPE_CREATE_STRUCT(2,array_of_block_length,array_of_displacement &
   ,array_of_types,MPI_part,mpi_err)
call MPI_TYPE_COMMIT(MPI_part,mpi_err)

call MPI_TYPE_EXTENT(MPI_PART,mpi_part_length,mpi_err)

!!!!!!!!!!!!!!!!!!!!!!!!!!!!
! The communication call...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!

< snip

Here some code filling part_array with data

snip >

call MPI_SCATTER(part_array,nb_sent,MPI_PART,MPI_IN_PLACE,nb_sent, &
MPI_PART,root,MPI_COMM_WORLD,mpi_err)

(I ensure nb_sent <= message_size)
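
For reference, a minimal sketch of how this call is typically split between
the root and the other ranks when MPI_IN_PLACE is used as the receive buffer
at the root (my_rank and recv_chunk are illustrative names, not from the
code above):

TYPE(PART), dimension(message_size) :: recv_chunk  ! receive buffer on non-root ranks
INTEGER :: my_rank

call MPI_COMM_RANK(MPI_COMM_WORLD, my_rank, mpi_err)

if (my_rank == root) then
   ! At the root, MPI_IN_PLACE is passed as the receive buffer: the root's
   ! own portion stays where it already is in part_array.
   call MPI_SCATTER(part_array, nb_sent, MPI_PART, MPI_IN_PLACE, nb_sent, &
                    MPI_PART, root, MPI_COMM_WORLD, mpi_err)
else
   ! All other ranks supply a real receive buffer; their send arguments
   ! are ignored by MPI_SCATTER.
   call MPI_SCATTER(part_array, nb_sent, MPI_PART, recv_chunk, nb_sent, &
                    MPI_PART, root, MPI_COMM_WORLD, mpi_err)
end if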


Are any corefiles generated?  Do you know which processes die?

 

Yes, it generates one core file in this case (message_size=1000), and
with 4 processes, 3 die:

"3 processes killed (possibly by Open MPI)"



Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Xiaoning (David) Yang
Jeff,

Thank you for the reply. In other words, MPI_IN_PLACE only eliminates data
movement on root, right?

David

* Correspondence *



> From: Jeff Squyres 
> Reply-To: Open MPI Users 
> Date: Fri, 3 Mar 2006 19:18:52 -0500
> To: Open MPI Users 
> Subject: Re: [OMPI users] MPI_IN_PLACE
> 
> On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:
> 
>>   call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
>>  &  MPI_COMM_WORLD,ierr)
>> 
>> Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
>> Thanks for any help!
> 
> MPI_IN_PLACE is an MPI-2 construct, and is defined in the MPI-2
> standard.  Its use in MPI_REDUCE is defined in section 7.3.3:
> 
> http://www.mpi-forum.org/docs/mpi-20-html/node150.htm#Node150
> 
> It says:
> 
> "The ``in place'' option for intracommunicators is specified by
> passing the value MPI_IN_PLACE to the argument sendbuf at the root.
> In such case, the input data is taken at the root from the receive
> buffer, where it will be replaced by the output data."
> 
> In the simple pi example program, it doesn't make much sense to use
> MPI_IN_PLACE except as an example to see how it is used (i.e., it
> won't gain much in terms of efficiency because you're only dealing
> with a single MPI_DOUBLE_PRECISION).  But you would want to put an
> "if" statement around the call to MPI_REDUCE and pass MPI_IN_PLACE as
> the first argument, and mypi as the second argument for the root.
> For all other processes, use the same MPI_REDUCE that you're using now.
> 
> -- 
> {+} Jeff Squyres
> {+} The Open MPI Project
> {+} http://www.open-mpi.org/
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
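
A minimal Fortran sketch of the conditional call described in the quoted
message (myid is assumed to hold the process rank, as in the usual pi
example; 0 is the root):

if (myid .eq. 0) then
   ! Root: pass MPI_IN_PLACE as the send buffer; the root's contribution is
   ! taken from mypi, which is then overwritten with the global sum.
   call MPI_REDUCE(MPI_IN_PLACE, mypi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, &
                   0, MPI_COMM_WORLD, ierr)
else
   ! All other ranks: the original call, unchanged.
   call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                   MPI_COMM_WORLD, ierr)
end if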




Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Jeff Squyres
Generally, yes.  There are some corner cases where we have to  
allocate additional buffers, but that's the main/easiest benefit to  
describe.  :-)



On Mar 6, 2006, at 11:21 AM, Xiaoning (David) Yang wrote:


Jeff,

Thank you for the reply. In other words, MPI_IN_PLACE only eliminates
data movement on root, right?

David

* Correspondence *




From: Jeff Squyres 
Reply-To: Open MPI Users 
Date: Fri, 3 Mar 2006 19:18:52 -0500
To: Open MPI Users 
Subject: Re: [OMPI users] MPI_IN_PLACE

On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:


  call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
 &  MPI_COMM_WORLD,ierr)

Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
Thanks for any help!


MPI_IN_PLACE is an MPI-2 construct, and is defined in the MPI-2
standard.  Its use in MPI_REDUCE is defined in section 7.3.3:

http://www.mpi-forum.org/docs/mpi-20-html/node150.htm#Node150

It says:

"The ``in place'' option for intracommunicators is specified by
passing the value MPI_IN_PLACE to the argument sendbuf at the root.
In such case, the input data is taken at the root from the receive
buffer, where it will be replaced by the output data."

In the simple pi example program, it doesn't make much sense to use
MPI_IN_PLACE except as an example to see how it is used (i.e., it
won't gain much in terms of efficiency because you're only dealing
with a single MPI_DOUBLE_PRECISION).  But you would want to put an
"if" statement around the call to MPI_REDUCE and pass MPI_IN_PLACE as
the first argument, and mypi as the second argument for the root.
For all other processes, use the same MPI_REDUCE that you're using  
now.


--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/




Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Xiaoning (David) Yang
I'm not quite sure how collective computation calls work. For example, for
an MPI_REDUCE with MPI_SUM, do all the processes collect values from all the
processes, calculate the sum, and put the result in recvbuf on the root?
Sounds strange.

David

* Correspondence *



> From: Jeff Squyres 
> Reply-To: Open MPI Users 
> Date: Mon, 6 Mar 2006 13:22:23 -0500
> To: Open MPI Users 
> Subject: Re: [OMPI users] MPI_IN_PLACE
> 
> Generally, yes.  There are some corner cases where we have to
> allocate additional buffers, but that's the main/easiest benefit to
> describe.  :-)
> 
> 
> On Mar 6, 2006, at 11:21 AM, Xiaoning (David) Yang wrote:
> 
>> Jeff,
>> 
>> Thank you for the reply. In other words, MPI_IN_PLACE only
>> eliminates data
>> movement on root, right?
>> 
>> David
>> 
>> * Correspondence *
>> 
>> 
>> 
>>> From: Jeff Squyres 
>>> Reply-To: Open MPI Users 
>>> Date: Fri, 3 Mar 2006 19:18:52 -0500
>>> To: Open MPI Users 
>>> Subject: Re: [OMPI users] MPI_IN_PLACE
>>> 
>>> On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:
>>> 
   call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
  &  MPI_COMM_WORLD,ierr)
 
 Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
 Thanks for any help!
>>> 
>>> MPI_IN_PLACE is an MPI-2 construct, and is defined in the MPI-2
>>> standard.  Its use in MPI_REDUCE is defined in section 7.3.3:
>>> 
>>> http://www.mpi-forum.org/docs/mpi-20-html/node150.htm#Node150
>>> 
>>> It says:
>>> 
>>> "The ``in place'' option for intracommunicators is specified by
>>> passing the value MPI_IN_PLACE to the argument sendbuf at the root.
>>> In such case, the input data is taken at the root from the receive
>>> buffer, where it will be replaced by the output data."
>>> 
>>> In the simple pi example program, it doesn't make much sense to use
>>> MPI_IN_PLACE except as an example to see how it is used (i.e., it
>>> won't gain much in terms of efficiency because you're only dealing
>>> with a single MPI_DOUBLE_PRECISION).  But you would want to put an
>>> "if" statement around the call to MPI_REDUCE and pass MPI_IN_PLACE as
>>> the first argument, and mypi as the second argument for the root.
>>> For all other processes, use the same MPI_REDUCE that you're using
>>> now.
>>> 
>>> -- 
>>> {+} Jeff Squyres
>>> {+} The Open MPI Project
>>> {+} http://www.open-mpi.org/
>>> 
>>> 
>>> ___
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>> 
>> 
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> -- 
> {+} Jeff Squyres
> {+} The Open MPI Project
> {+} http://www.open-mpi.org/
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Jeff Squyres

On Mar 6, 2006, at 3:38 PM, Xiaoning (David) Yang wrote:

I'm not quite sure how collective computation calls work. For example,
for an MPI_REDUCE with MPI_SUM, do all the processes collect values from
all the processes, calculate the sum, and put the result in recvbuf on
the root? Sounds strange.


The implementation of how MPI_REDUCE works is not specified by the
standard.  Only the semantics are specified (when MPI_REDUCE with
MPI_SUM returns, the root's recvbuf holds the sum of the data
contributed by all processes in the communicator, including the root).
As such, an MPI implementation is free to implement it however it
wishes.


There has been a considerable amount of research on how to optimize  
collective algorithm implementations in MPI over the past ~5 years  
(and outside of MPI for 20+ years before that).


--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/




Re: [OMPI users] MPI_IN_PLACE

2006-03-06 Thread Graham E Fagg

Hi David
Yep, they do (reduce the values to a single location), and in a tree
topology it would look something like the following:



proc   3  4  5   6
local values   30 40 50  60
partial sums   -  -  -   -


proc1  2
local values10 20
partial sums10+30+40 (80)  20+50+60 (130)


proc 0
local values 0
partial sums 0+80+130 = 210

result at root (0)   210

With MPI_IN_PLACE, the root's value (0) would be supplied in its receive
buffer (and overwritten there by the result) rather than in a separate
send buffer.

The MPI_IN_PLACE option is more important for allreduce as it saves lots 
of potential local data movement.
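
For instance, a minimal sketch of an in-place allreduce (vals and n are
illustrative names, not from this thread): every rank passes MPI_IN_PLACE as
the send buffer, so its contribution is read from vals and overwritten with
the global result, with no separate send buffer or extra local copy.

INTEGER, PARAMETER :: n = 1000
real(KIND=8), dimension(n) :: vals

! ... fill vals with this rank's local contributions ...
call MPI_ALLREDUCE(MPI_IN_PLACE, vals, n, MPI_DOUBLE_PRECISION, MPI_SUM, &
                   MPI_COMM_WORLD, ierr)
! On return, vals(1:n) on every rank holds the element-wise sum over all ranks.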


I suggest that you look on the web for an MPI primer or tutorial to gain
more understanding.


G.


On Mon, 6 Mar 2006, Xiaoning (David) Yang wrote:


I'm not quite sure how collective computation calls work. For example, for
an MPI_REDUCE with MPI_SUM, do all the processes collect values from all the
processes, calculate the sum, and put the result in recvbuf on the root?
Sounds strange.

David

* Correspondence *




From: Jeff Squyres 
Reply-To: Open MPI Users 
Date: Mon, 6 Mar 2006 13:22:23 -0500
To: Open MPI Users 
Subject: Re: [OMPI users] MPI_IN_PLACE

Generally, yes.  There are some corner cases where we have to
allocate additional buffers, but that's the main/easiest benefit to
describe.  :-)


On Mar 6, 2006, at 11:21 AM, Xiaoning (David) Yang wrote:


Jeff,

Thank you for the reply. In other words, MPI_IN_PLACE only
eliminates data
movement on root, right?

David

* Correspondence *




From: Jeff Squyres 
Reply-To: Open MPI Users 
Date: Fri, 3 Mar 2006 19:18:52 -0500
To: Open MPI Users 
Subject: Re: [OMPI users] MPI_IN_PLACE

On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:


  call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
 &  MPI_COMM_WORLD,ierr)

Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
Thanks for any help!


MPI_IN_PLACE is an MPI-2 construct, and is defined in the MPI-2
standard.  Its use in MPI_REDUCE is defined in section 7.3.3:

http://www.mpi-forum.org/docs/mpi-20-html/node150.htm#Node150

It says:

"The ``in place'' option for intracommunicators is specified by
passing the value MPI_IN_PLACE to the argument sendbuf at the root.
In such case, the input data is taken at the root from the receive
buffer, where it will be replaced by the output data."

In the simple pi example program, it doesn't make much sense to use
MPI_IN_PLACE except as an example to see how it is used (i.e., it
won't gain much in terms of efficiency because you're only dealing
with a single MPI_DOUBLE_PRECISION).  But you would want to put an
"if" statement around the call to MPI_REDUCE and pass MPI_IN_PLACE as
the first argument, and mypi as the second argument for the root.
For all other processes, use the same MPI_REDUCE that you're using
now.

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/


___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users




Thanks,
Graham.
--
Dr Graham E. Fagg   | Distributed, Parallel and Meta-Computing
Innovative Computing Lab. PVM3.4, HARNESS, FT-MPI, SNIPE & Open MPI
Computer Science Dept   | Suite 203, 1122 Volunteer Blvd,
University of Tennessee | Knoxville, Tennessee, USA. TN 37996-3450
Email: f...@cs.utk.edu  | Phone:+1(865)974-5790 | Fax:+1(865)974-8296
Broken complex systems are always derived from working simple systems
--


[OMPI users] MPI for DSP

2006-03-06 Thread 赖俊杰
Hello everyone, I'm a research assistant at Tsinghua University.
I am now starting to study MPI for DSP.
Can anybody tell me something about this field?
Thanks.

laij...@mails.tsinghua.edu.cn
  2006-03-07