Mike -

The most expedient way to get this normalization is

  rho / (rho.cellVolumeAverage * mesh.cellVolumes.sum())

`cellVolumeAverage` and `cellVolumes.sum()` automatically take care of the MPI 
communication and avoid double-counting the overlaps between partitions.

Also, if you use CylindricalGrid2D, then mesh.cellVolumes will automatically 
account for the radial variation.

You shouldn't ever find yourself looping over cells. Let FiPy (which in turn 
lets numpy) do that.
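As a sanity check of why that expression normalizes the field, here is a plain 
NumPy sketch (no FiPy required); the cell values and volumes are made-up 
stand-ins for a `CellVariable` and `mesh.cellVolumes`:

```python
import numpy as np

# Hypothetical cell data standing in for a FiPy CellVariable and mesh:
rho = np.array([1.0, 2.0, 3.0, 4.0])           # cell values
cell_volumes = np.array([0.5, 0.5, 1.0, 1.0])  # mesh.cellVolumes

# cellVolumeAverage is the volume-weighted mean: sum(rho * V) / sum(V)
cell_volume_average = (rho * cell_volumes).sum() / cell_volumes.sum()

# The suggested normalization divides by the volume integral of rho,
# since cellVolumeAverage * sum(V) == sum(rho * V):
rho_normalized = rho / (cell_volume_average * cell_volumes.sum())

# The normalized field now integrates to 1 over the mesh:
print((rho_normalized * cell_volumes).sum())  # 1.0
```

In parallel FiPy the same one-liner works unchanged, because 
`cellVolumeAverage` and `cellVolumes.sum()` reduce across partitions for you.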

- Jon

On Oct 20, 2015, at 3:22 PM, Michael Waters <[email protected]> wrote:

> Hello, I am trying to implement a parallel normalization function. Right now 
> I have something like this:
> 
> rc, zc = mesh.cellCenters
> dv = 2.0*pi*rc*dr*dz # cylindrical mesh
> 
> 
> def normalize(rho):
>     factor = 1.0 / ((rho*dv).globalValue).sum()
>     
>     for i in range(rho.shape[0]):
>         rho[i] = rho[i]*factor
> 
> Is there an efficient way I can do this where I don't need to reference the 
> individual cells (and all the associated MPI communication for each node)? 
> Can the summations for the normalizing factor be on each node then collected? 
>  I am very new to parallel FiPy and would love any feedback.
> 
> Thanks,
> -Mike Waters
> 
> _______________________________________________
> fipy mailing list
> [email protected]
> http://www.ctcms.nist.gov/fipy
>  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]

