Dear Daniel,

Thank you for your response.

In my script I do not use any viewer. I want to run the script on 
supercomputers.
Here is the script; it crashes on 1 CPU too.

import sys
from os import getcwd, chdir, mkdir
from mpi4py import *
from fipy import *
from fipy import parallel
from fipy.meshes.numMesh.uniformGrid2D import UniformGrid2D

def print_out_variable(variable_name, variable_array, mesh, results_directory, filename_flag):
    pwd = getcwd()
    chdir(results_directory)
    outfile = open("image_" + filename_flag, "w")  # avoid shadowing the builtin 'file'
    outfile.write("# x y phi\n")
    #outfile.write(str(variable_array.size)+"\n")
    #outfile.write(str(mesh.shape)+"\n")
    for i in range(int(variable_array.size)):
        x_value = mesh[0][i]
        y_value = mesh[1][i]
        variable_value = variable_array[i]
        outfile.write(" " + str(x_value) + " " + str(y_value) + " " + str(variable_value) + "\n")
    outfile.close()
    chdir(pwd)
    return

if parallel.procID == 0:
    submission_directory="/home/x_ferta/FiPY/Diffusion"
    results_directory_name="RESULTS"
    results_directory=submission_directory+"/"+results_directory_name
    mkdir(results_directory)

solver = LinearLUSolver()

print(solver)

nx = 20
ny = nx
dx = 1.
dy = dx
L = dx * nx
mesh = Grid2D(dx=dx, dy=dy, nx=nx, ny=ny)
###mesh = UniformGrid2D(dx=dx, dy=dy, nx=nx, ny=ny)
###mesh_vector = CellVariable(name="mesh(x,y)", mesh=mesh, value=mesh.getCellCenters())
print "%d cells on processor %d of %d" \
  % (mesh.getNumberOfCells(), parallel.procID, parallel.Nproc)

phi = CellVariable(name="solution variable", mesh=mesh, value=0.)

D = 1.
eq = TransientTerm() == DiffusionTerm(coeff=D)

valueTopLeft = 0
valueBottomRight = 1

x, y = mesh.getFaceCenters()
facesTopLeft = ((mesh.getFacesLeft() & (y > L / 2))
                | (mesh.getFacesTop() & (x < L / 2)))
facesBottomRight = ((mesh.getFacesRight() & (y < L / 2))
                    | (mesh.getFacesBottom() & (x > L / 2)))
BCs = (FixedValue(faces=facesTopLeft, value=valueTopLeft),
       FixedValue(faces=facesBottomRight, value=valueBottomRight))

timeStepDuration = 10 * 0.9 * dx**2 / (2 * D)
steps = 10
for step in range(steps):
    print("before solve", step)
    eq.solve(var=phi, boundaryConditions=BCs, dt=timeStepDuration)
    print("after solve", step)
    ###all_phi = phi.getGlobalValue()
    ###all_mesh_vector = mesh_vector.getGlobalValue()
    ###if parallel.procID == 0:
    ###    print_out_variable("phi", all_phi, all_mesh_vector, results_directory, str(step))
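In case it helps to isolate the output step from FiPy itself: the commented-out block above follows a gather-on-rank-0 pattern (collect the global arrays, then let only processor 0 write). Here is a minimal, FiPy-free sketch of just the file-writing half, using plain lists in place of the arrays returned by getGlobalValue(); the helper name write_xy_phi is hypothetical, but the output format matches what print_out_variable produces.

```python
def write_xy_phi(path, xs, ys, phis):
    """Write one ' x y phi' row per cell, matching print_out_variable's format."""
    with open(path, "w") as f:
        f.write("# x y phi\n")
        for x, y, phi in zip(xs, ys, phis):
            f.write(" %s %s %s\n" % (x, y, phi))

if __name__ == "__main__":
    # Stand-ins for phi.getGlobalValue() / mesh_vector.getGlobalValue() on
    # processor 0; every other rank would simply skip the write.
    xs = [0.5, 1.5, 0.5, 1.5]
    ys = [0.5, 0.5, 1.5, 1.5]
    phis = [0.0, 0.25, 0.5, 1.0]
    write_xy_phi("image_0", xs, ys, phis)
```

If this part runs cleanly on its own, the crash is presumably on the solve/gather side rather than in the output code.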



Thank you for your help!
Best wishes,
Ferenc

*
"If there's something you can't change then why worry about it?" by Fauja Singh,
 a 100-year-old marathon runner
*


On Jan 4, 2012, at 5:47 PM, Daniel Wheeler wrote:

> 
> 
> On Wed, Jan 4, 2012 at 3:20 AM, Ferenc Tasnadi <[email protected]> wrote:
> 
> 
> Fine. After that I wanted to try the diffusion/mesh20x20.py with using 
> FIPY_SOLVERS=Trilinos, but
> it works only if the mesh is not larger than 10x10. If I choose a larger
> one I get a segmentation fault (signal 11).
> With Pysparse on 1 CPU the script (mesh20x20.py) works just fine with large 
> meshes.
> 
> Does it work if you comment out all the lines in mesh20x20.py that refer to 
> viewer?
>  
> Do you have any idea what went wrong during the installation and how to fix 
> it? I am not an expert in
> installing trilinos. I use python2.7 and trilinos-10.8.3.
> 
> As Jon suggested, it is probably the viewer's inability to plot meshes
> with 0 cells.
> 
> -- 
> Daniel Wheeler
> _______________________________________________
> fipy mailing list
> [email protected]
> http://www.ctcms.nist.gov/fipy
>  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
