Hello all,
Thank you for a very useful package. My colleagues and I have two questions, one about partitioning for solving in parallel and one about implicit time integration.
 
1) Partitioning
As I understand it, this question has been raised before.
When running in parallel on a 2D/3D problem, the default partitioning divides the mesh into equal slices/slabs distributed across the processor nodes. Assuming an equidistant grid (nx = ny = nz), this implies that the maximum number of nodes that can be used is limited by nx: the number of processor nodes must be less than or equal to nx.
Because we couple the thermodynamics and kinetics to an external package (TQ), and computing these properties at each mesh point is computationally intensive, we would like to use more processor nodes than this type of partitioning allows. Instead of "slicing" the mesh, we would like a 3D partitioning of an equidistant simulation into blocks of size (nx/N)*(ny/N)*(nz/N), where N^3 is the number of processor nodes. Is this possible?
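To make concrete the decomposition we have in mind, here is a small sketch (the function names are our own, not FiPy API; it assumes N divides nx evenly):

```python
def block_owner(i, j, k, nx, N):
    """Return the rank that owns cell (i, j, k) of an nx**3 grid
    split into N blocks along each axis (assumes nx % N == 0)."""
    b = nx // N  # cells per block along one axis
    return (i // b) * N * N + (j // b) * N + (k // b)

def cells_per_rank(nx, N):
    """Number of cells each of the N**3 ranks owns."""
    return (nx // N) ** 3
```

For example, an 8x8x8 grid split with N = 2 gives 8 ranks of 64 cells each, whereas slab partitioning of the same grid can use at most nx = 8 ranks in total; for larger N the block scheme admits far more ranks than slabs do.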

2) Implicitness
When solving the Cahn-Hilliard problem with thermodynamics and kinetics provided by an external package, we find that these are only evaluated at the beginning of the timestep, implying the problem is solved explicitly. Note that in this case we have no explicit expression for the diffusivity, as it depends on temperature and composition. From my understanding, FiPy should be able to solve this type of problem implicitly, with a higher degree of accuracy than a fully explicit integration scheme. How do we ensure that all terms are solved implicitly?
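To illustrate the scheme we are after outside of FiPy: the diffusivity is re-evaluated at the latest iterate and the implicit linear system is re-solved ("swept") until the step converges, rather than evaluating the coefficient once at the start of the step. A minimal 1D backward-Euler sketch with a composition-dependent diffusivity (illustrative only, not FiPy code):

```python
import numpy as np

def implicit_step(c, dt, dx, D_of_c, sweeps=5):
    """One backward-Euler step of c_t = d/dx(D(c) dc/dx) with zero-flux
    boundaries.  D is lagged at the latest iterate and the linear system
    re-assembled and re-solved each sweep, so the step is implicit in c."""
    n = len(c)
    c_old = c.copy()
    c_new = c.copy()
    r = dt / dx**2
    for _ in range(sweeps):
        # face diffusivities from the current iterate (this is where an
        # external thermodynamic package would be called)
        Df = D_of_c(0.5 * (c_new[:-1] + c_new[1:]))
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 1.0
            if i > 0:
                A[i, i - 1] = -r * Df[i - 1]
                A[i, i] += r * Df[i - 1]
            if i < n - 1:
                A[i, i + 1] = -r * Df[i]
                A[i, i] += r * Df[i]
        c_new = np.linalg.solve(A, c_old)
    return c_new
```

With zero-flux boundaries the telescoping fluxes conserve the total amount of c, and the sweeping loop is what distinguishes this from a fully explicit step where D(c) is frozen at the old time level.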

Thank you!

Joakim

_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
