Renaming did the trick! Now I can build the library, but the parallel part is not working (things are much slower in parallel). I will try to figure that out next.
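In case it helps to see what I mean by "slower", this is roughly how I am comparing the two versions (assuming Fpi, the serial wrapper exported by the same package, takes the same DARTS and ROUNDS arguments):

    library(MyPi)
    ## serial baseline
    system.time(Fpi(DARTS = 5000, ROUNDS = 100))
    ## MPI version with two slaves; this is the call that comes out slower
    system.time(FMPIpi(DARTS = 5000, ROUNDS = 100, cores = 2))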
Is this a good list for R + Fortran + MPI questions, or should I go somewhere else? Thanks!

On Tue, Aug 11, 2015 at 2:21 PM Tom Wainwright <thomas.wainwri...@noaa.gov> wrote:

> Not sure, but your problem might be answered in the .Fortran() help page:
>
>> All Fortran compilers known to be usable to compile R map symbol names
>> to lower case, and so does .Fortran.
>
> I've been caught by that before, and found that using all-lowercase names
> for Fortran routines in R is safest.
>
> Tom Wainwright
>
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> The contents of this message are mine personally and do not
> necessarily reflect any position of the Government or the
> National Oceanic and Atmospheric Administration.
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> On Tue, Aug 11, 2015 at 10:32 AM, Ignacio Martinez <ignaci...@gmail.com> wrote:
>
>> I'm trying to create a package that uses an MPI Fortran module. I have a
>> working version <https://github.com/ignacio82/MyPi> of that package that
>> uses a Fortran module without MPI.
>>
>> When I run the function `FMPIpi(DARTS = 5000, ROUNDS = 100, cores=2)` I
>> get the following error:
>>
>> > FMPIpi(DARTS = 5000, ROUNDS = 100, cores=2)
>> 2 slaves are spawned successfully. 0 failed.
>> master (rank 0, comm 1) of size 3 is running on: 2d60fd60575b
>> slave1 (rank 1, comm 1) of size 3 is running on: 2d60fd60575b
>> slave2 (rank 2, comm 1) of size 3 is running on: 2d60fd60575b
>> Error in .Fortran("MPIpi", avepi = as.numeric(1), DARTS = as.integer(DARTS), :
>>   "mpipi" not resolved from current namespace (MyPi)
>>
>> It looks like something is wrong in my NAMESPACE. This is what I have
>> there:
>>
>>   export(Pibenchmark)
>>   export(Fpi)
>>   export(FMPIpi)
>>   export(Rpi)
>>   useDynLib(MyPi)
>>   exportPattern("^[[:alpha:]]+")
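As an aside, for anyone who hits the same error: a quick way to check whether the name that .Fortran() will look up can be resolved at all is is.loaded(), called with the same lower-cased name that .Fortran() uses. Something along these lines would have shown the problem here:

    library(MyPi)
    ## .Fortran("MPIpi") maps the name to lower case, so "mpipi" is the
    ## lookup that actually has to succeed; FALSE matches the error above
    is.loaded("mpipi")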
>> This is my Fortran module:
>>
>>   Module Fortranpi
>>   USE MPI
>>   IMPLICIT NONE
>>   contains
>>
>>   subroutine dboard(darts, dartsscore)
>>     integer, intent(in)           :: darts
>>     double precision, intent(out) :: dartsscore
>>     double precision              :: x_coord, y_coord
>>     integer                       :: score, n
>>
>>     score = 0
>>     do n = 1, darts
>>       call random_number(x_coord)
>>       call random_number(y_coord)
>>       if ((x_coord**2 + y_coord**2) <= 1.0d0) then
>>         score = score + 1
>>       end if
>>     end do
>>
>>     dartsscore = 4.0d0*score/darts
>>   end subroutine dboard
>>
>>   subroutine pi(avepi, DARTS, ROUNDS) bind(C, name="pi_")
>>     use, intrinsic :: iso_c_binding, only : c_double, c_int
>>     real(c_double), intent(out) :: avepi
>>     integer(c_int), intent(in)  :: DARTS, ROUNDS
>>     integer                     :: MASTER, rank, i, n
>>     integer, allocatable        :: seed(:)
>>     double precision            :: pi_est, homepi, pirecv, pisum
>>
>>     ! we set it to zero in the sequential run
>>     rank = 0
>>     ! initialize the random number generator
>>     ! we make sure the seed is different for each task
>>     call random_seed()
>>     call random_seed(size = n)
>>     allocate(seed(n))
>>     seed = 12 + rank*11
>>     call random_seed(put=seed(1:n))
>>     deallocate(seed)
>>
>>     avepi = 0
>>     do i = 0, ROUNDS-1
>>       call dboard(DARTS, pi_est)
>>       ! calculate the average value of pi over all iterations
>>       avepi = ((avepi*i) + pi_est)/(i + 1)
>>     end do
>>   end subroutine pi
>>
>>   subroutine MPIpi(avepi, DARTS, ROUNDS) bind(C, name="MPIpi_")
>>     use, intrinsic :: iso_c_binding, only : c_double, c_int
>>     real(c_double), intent(out) :: avepi
>>     integer(c_int), intent(in)  :: DARTS, ROUNDS
>>     integer                     :: i, n, mynpts, ierr, numprocs, proc_num
>>     integer, allocatable        :: seed(:)
>>     double precision            :: pi_est, y, sumpi
>>
>>     call mpi_init(ierr)
>>     call mpi_comm_size(MPI_COMM_WORLD, numprocs, ierr)
>>     call mpi_comm_rank(MPI_COMM_WORLD, proc_num, ierr)
>>
>>     ! rank 0 takes up any remainder rounds
>>     if (proc_num .eq. 0) then
>>       mynpts = ROUNDS - (numprocs-1)*(ROUNDS/numprocs)
>>     else
>>       mynpts = ROUNDS/numprocs
>>     endif
>>
>>     ! initialize the random number generator
>>     ! we make sure the seed is different for each task
>>     call random_seed()
>>     call random_seed(size = n)
>>     allocate(seed(n))
>>     seed = 12 + proc_num*11
>>     call random_seed(put=seed(1:n))
>>     deallocate(seed)
>>
>>     y = 0.0d0
>>     do i = 1, mynpts
>>       call dboard(DARTS, pi_est)
>>       y = y + pi_est
>>     end do
>>
>>     call mpi_reduce(y, sumpi, 1, mpi_double_precision, mpi_sum, 0, &
>>                     mpi_comm_world, ierr)
>>     if (proc_num == 0) avepi = sumpi/ROUNDS
>>     call mpi_finalize(ierr)
>>   end subroutine MPIpi
>>
>>   end module Fortranpi
>>
>> and this is my R function:
>>
>>   #' @export
>>   FMPIpi <- function(DARTS, ROUNDS, cores) {
>>     Rmpi::mpi.spawn.Rslaves(nslaves = cores)
>>     retvals <- .Fortran("MPIpi", avepi = as.numeric(1),
>>                         DARTS = as.integer(DARTS),
>>                         ROUNDS = as.integer(ROUNDS))
>>     return(retvals$avepi)
>>   }
>>
>> What am I doing wrong?
>>
>> Thanks a lot!
>>
>> Ignacio
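PS, to close the loop with the top of the thread: the renaming that did the trick was, roughly, giving the routine an all-lowercase bind name (e.g. bind(C, name="mpipi_") on the Fortran side) so that it matches the lower-cased lookup that .Fortran() performs:

    ## sketch, assuming the binding was renamed to "mpipi_"
    retvals <- .Fortran("mpipi", avepi = as.numeric(1),
                        DARTS = as.integer(5000),
                        ROUNDS = as.integer(100))
    retvals$avepi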