Hi Mike,

Yes: the MPI spec defines collective operations precisely to avoid the flood of 
individual sends and receives you're rightly wary of. The operation you want is 
called Allreduce: it takes one value from each MPI rank, combines them with a 
specified operation (here, a sum), and distributes the result back to every 
rank. Your parallel normalization will then look something like


from mpi4py import MPI


def normalize(rho):
    # sum rho*dv over this rank's local cells only (no globalValue needed)
    local_total = (rho * dv).value.sum()

    # combine the per-rank partial sums; every rank gets the same global total
    factor = 1.0 / MPI.COMM_WORLD.allreduce(local_total, op=MPI.SUM)

    # scale in place, vectorized -- no per-cell communication
    rho.setValue(rho * factor)



Note mpi4py's warning about the naive implementation of this function: each 
rank sends its value to rank zero, which computes the sum and broadcasts the 
result back to all the other ranks. That is fine, but it may scale poorly to 
large numbers (hundreds or more) of MPI ranks. For summing a single double over 
a few tens of CPUs, it will not be a problem.
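For illustration, the naive pattern that warning describes can be written out 
by hand as a reduce followed by a broadcast (a sketch only; the function name 
is mine, and on a single rank it simply returns the local value unchanged):

from mpi4py import MPI

comm = MPI.COMM_WORLD


def naive_allreduce(value):
    # gather each rank's partial value onto rank zero and sum them there...
    total = comm.reduce(value, op=MPI.SUM, root=0)
    # ...then send the combined result back out to every rank
    return comm.bcast(total, root=0)

The result is the same as comm.allreduce(value, op=MPI.SUM); real MPI 
implementations are free to use smarter patterns (e.g. recursive doubling) 
under the hood, which is why the built-in collective is preferable.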


Good luck,

Trevor



Trevor Keller, Ph.D.
Materials Science and Engineering Division
National Institute of Standards and Technology
100 Bureau Dr. MS 8550; Gaithersburg, MD 20899
Office: 223/A131 or (301) 975-2889



________________________________
From: [email protected] <[email protected]> on behalf of Michael Waters 
<[email protected]>
Sent: Tuesday, October 20, 2015 3:22 PM
To: FIPY
Subject: Best way to implement parallel normalization?

Hello, I am trying to implement a parallel normalization function. Right now I 
have something like this:

rc, zc = mesh.cellCenters
dv = 2.0*pi*rc*dr*dz # cylindrical mesh


def normalize(rho):
    factor = 1.0 / ((rho*dv).globalValue).sum()

    for i in range(rho.shape[0]):
        rho[i] = rho[i]*factor

Is there an efficient way I can do this where I don't need to reference the 
individual cells (and all the associated MPI communication for each node)? Can 
the summations for the normalizing factor be on each node then collected?  I am 
very new to parallel FiPy and would love any feedback.

Thanks,
-Mike Waters





_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
