Dear Jonathan,

Thank you for your response. I have tried it on 8 CPUs and even on 1 CPU, but 
it does not work.
I run it on a supercomputer, so I allocate 1 node (each node has 8 cores); in 
the 16-CPU case I used 2 nodes.

Here is my script, in case it helps:

import sys
from os import getcwd, chdir,system,listdir,mkdir
from mpi4py import *
from fipy import *
from fipy import parallel
from fipy.meshes.numMesh.uniformGrid2D import UniformGrid2D

def print_out_variable(variable_name,variable_array,mesh,results_directory,filename_flag):
    pwd=getcwd()
    chdir(results_directory)
    file=open("image_"+filename_flag,"w")
    file.write("# x y phi\n")
    #file.write(str(variable_array.size)+"\n")
    #file.write(str(mesh.shape)+"\n")
    i=0
    while i < int(variable_array.size):
        x_value=mesh[0][i]
        y_value=mesh[1][i]
        variable_value=variable_array[i]
        wstring=" "+str(x_value)+" "+str(y_value)+" "+str(variable_value)+"\n"
        file.write(wstring)
        i=i+1
    file.close()
    chdir(pwd)
    return()

if parallel.procID == 0:
    submission_directory="/home/x_ferta/FiPY/Diffusion"
    results_directory_name="RESULTS"
    results_directory=submission_directory+"/"+results_directory_name
    mkdir(results_directory)

print(solver)

nx = 20
ny = nx
dx = 1.
dy = dx
L = dx * nx
mesh = Grid2D(dx=dx, dy=dy, nx=nx, ny=ny)
###mesh = UniformGrid2D(dx=dx, dy=dy, nx=nx, ny=ny)
###mesh_vector=CellVariable(name="mesh(x,y)", mesh=mesh, value=mesh.getCellCenters())
print "%d cells on processor %d of %d" \
  % (mesh.getNumberOfCells(), parallel.procID, parallel.Nproc)

phi = CellVariable(name = "solution variable",mesh = mesh,value = 0.)

D = 1.
eq = TransientTerm() == DiffusionTerm(coeff=D)

valueTopLeft = 0
valueBottomRight = 1

x, y = mesh.getFaceCenters()
facesTopLeft = ((mesh.getFacesLeft() & (y > L / 2)) | (mesh.getFacesTop() & (x < L / 2)))
facesBottomRight = ((mesh.getFacesRight() & (y < L / 2)) | (mesh.getFacesBottom() & (x > L / 2)))
BCs = (FixedValue(faces=facesTopLeft, value=valueTopLeft), FixedValue(faces=facesBottomRight, value=valueBottomRight))

timeStepDuration = 10 * 0.9 * dx**2 / (2 * D)
steps = 10
for step in range(steps):
    print("before solve",step)
    eq.solve(var=phi,boundaryConditions=BCs,dt=timeStepDuration)
    print("after solve",step)
    ###all_phi=phi.getGlobalValue()
    ###all_mesh_vector=mesh_vector.getGlobalValue()
    ###if parallel.procID == 0:
    ###    print_out_variable("phi",all_phi,all_mesh_vector,results_directory,str(step))



and the output with 8 CPUs:

trilinos
trilinos
trilinos
trilinos
trilinos
trilinos
trilinos
trilinos
120 cells on processor 1 of 8
80 cells on processor 0 of 8
120 cells on processor 2 of 8
120 cells on processor 5 of 8
160 cells on processor 7 of 8
120 cells on processor 3 of 8
120 cells on processor 6 of 8
120 cells on processor 4 of 8
('before solve', 0)
('before solve', 0)
('before solve', 0)
('before solve', 0)
('before solve', 0)
('before solve', 0)
('before solve', 0)
('before solve', 0)
3 total processes killed (some possibly by mpirun during cleanup)

mpirun noticed that process rank 3 with PID 14178 on node m77 exited on signal 11 (Segmentation fault).
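
One thing I notice in these numbers: the per-rank cell counts sum to more than 
nx*ny = 400. I assume each rank also counts overlap (ghost) cells along the 
slab boundaries, though I have not checked this against the FiPy source:

```python
# Per-rank counts from the 8-CPU run above; the grid has nx*ny = 400 cells.
counts = [80, 120, 120, 120, 120, 120, 120, 160]
total = sum(counts)
print(total)        # 960
print(total - 400)  # 560 cells would then be overlap copies
```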


and with 1 CPU:

trilinos
400 cells on processor 0 of 1
('before solve', 0)

mpirun noticed that process rank 0 with PID 13636 on node m118 exited on signal 11 (Segmentation fault).



Thank you very much for your help!

Best wishes,
Ferenc

*
"If there's something you can't change, then why worry about it?" -- Fauja Singh,
 a 100-year-old marathon runner
*


On Jan 4, 2012, at 3:07 PM, Jonathan Guyer wrote:

> 
> On Jan 4, 2012, at 3:20 AM, Ferenc Tasnadi wrote:
> 
>> Fine. After that I wanted to try diffusion/mesh20x20.py using 
>> FIPY_SOLVERS=Trilinos, but it works only if the mesh is no larger than 
>> 10x10. If I choose a larger one, I get a segmentation fault (signal 11).
>> With Pysparse on 1 CPU the script (mesh20x20.py) works just fine with large 
>> meshes.
> 
> Have you tried with fewer than 16 processors? FiPy parallelizes the Grid 
> meshes by cutting them into slabs and by the time it slices 20 cells into 16 
> groups, some nodes don't end up having any cells on them at all. I suspect 
> that this is what's causing the problem. FiPy should be more robust to this 
> situation (and I thought that we tested that it was), but when I try to run 
> diffusion/mesh20x20.py with 16 processors, FiPy tries to launch the Mayavi 
> viewer instead of Matplotlib, which tells me that it doesn't understand that 
> it has 2D meshes.
> 
> Reduce the number of processors (8 worked for me) and see if that helps.
> 
> In the next release of FiPy, you will be able to use Gmsh to obtain more 
> efficient partitioning and should be able to assign many more nodes to a 
> 20x20 mesh (although there's probably not much benefit in doing so; we find 
> that the Grid slabs actually perform quite well).
> 
>> Do you have any idea what went wrong during the installation and how to fix 
>> it? I am not an expert in
>> installing trilinos. I use python2.7 and trilinos-10.8.3.
> 
> I doubt anything is wrong with your PyTrilinos installation, but please let 
> us know if you can't solve diffusion/mesh20x20.py on a smaller number of 
> nodes (start with 2).
> 
> 
> _______________________________________________
> fipy mailing list
> [email protected]
> http://www.ctcms.nist.gov/fipy
>  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]


