Re: [OMPI users] Is this an OpenMPI bug?

2009-02-23 Thread Jeff Squyres

On Feb 20, 2009, at 6:54 PM, -Gim wrote:

I am trying to use the mpi_bcast function in Fortran.  I am using  
Open MPI v1.2.7


Say x is a real array of size 100 and np = 100.  I try to bcast this  
to all the processors.


I use call mpi_bcast(x,np,mpi_real,0,ierr)


Aren't you missing the communicator argument in there?
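For reference, the Fortran binding of MPI_BCAST takes six arguments: buffer, count, datatype, root, communicator, and the error status.  A minimal corrected sketch (assuming the intent was to broadcast a default-real array from rank 0 over MPI_COMM_WORLD):

```fortran
      program bcast_example
      implicit none
      include 'mpif.h'
      integer, parameter :: np = 100
      real :: x(np)             ! default real, matching MPI_REAL
      integer :: ierr, rank

      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      if (rank == 0) x = 1.0    ! root fills the buffer before the bcast

!     Note the communicator argument before ierr -- it was missing
!     in the original five-argument call.
      call MPI_BCAST(x, np, MPI_REAL, 0, MPI_COMM_WORLD, ierr)

      call MPI_FINALIZE(ierr)
      end program bcast_example
```

Compile with the wrapper compiler (e.g. mpif90) and run with "mpirun -np 4 ./a.out"; every rank should then hold the same 100 values.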

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Is this an OpenMPI bug?

2009-02-21 Thread Terry Frankcombe
When you say "a real variable", you mean default real, no crazy implicit
typing or anything?

I think if x is real(8) you'd see what you say you see.
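That would match the symptom exactly: MPI_REAL describes a 4-byte default real, so broadcasting np elements of MPI_REAL from a real(8) array moves np*4 bytes, i.e. only the first np/2 elements of the 8-byte data.  A sketch of the fix, assuming x really is real(8):

```fortran
      real(8) :: x(100)
      integer :: ierr

!     An 8-byte array must be described with an 8-byte MPI type;
!     MPI_REAL (4 bytes) would transfer only half the buffer,
!     leaving the upper half of x as junk on the receivers.
      call MPI_BCAST(x, 100, MPI_DOUBLE_PRECISION, 0,
     &               MPI_COMM_WORLD, ierr)
```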


On Fri, 2009-02-20 at 18:54 -0500, -Gim wrote:
> I am trying to use the mpi_bcast function in Fortran.  I am using
> Open MPI v1.2.7
> 
> Say x is a real array of size 100 and np = 100.  I try to bcast this to
> all the processors. 
> 
> I use call mpi_bcast(x,np,mpi_real,0,ierr) 
> 
> When I do this and print the values on the receiving processor,
> exactly half the values get broadcast.  In this case, I get 50 correct
> values on the receiving processor and the rest are junk.  The same
> happened when I tried np = 20: exactly 10 values get populated and the
> rest are junk!
> 
> ps: I am running this on a single processor (just for testing).
> I run this with "mpirun -np 4  "
> 
> Cheerio,
> Gim
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



[OMPI users] Is this an OpenMPI bug?

2009-02-20 Thread -Gim
I am trying to use the mpi_bcast function in Fortran.  I am using
Open MPI v1.2.7

Say x is a real array of size 100 and np = 100.  I try to bcast this to all
the processors.

I use call mpi_bcast(x,np,mpi_real,0,ierr)

When I do this and print the values on the receiving processor, exactly
half the values get broadcast.  In this case, I get 50 correct values on
the receiving processor and the rest are junk.  The same happened when I
tried np = 20: exactly 10 values get populated and the rest are junk!

ps: I am running this on a single processor (just for testing).  I run
this with "mpirun -np 4  "

Cheerio,
Gim