Dan,

Before you pursue the error too deeply, I have been able to load the file 
across 3 nodes (12 cores), but it has only succeeded twice.

It may be that we have a node or two going bad, or that I need to move my 
entire FiPy installation, including gmsh, onto the high-performance disk.

When I get consistent behavior I'll post the results... success or failure.

Thanks,

Bill

On Mar 28, 2014, at 11:01 AM, "Seufzer, William J. (LARC-D307)" 
<[email protected]> wrote:

> Dan,
> 
> We've made a little progress. I now get an error! Answers to your questions 
> are embedded.
> 
> On Mar 27, 2014, at 2:26 PM, Daniel Wheeler <[email protected]> wrote:
> 
>> On Thu, Mar 27, 2014 at 10:25 AM, Seufzer, William J. (LARC-D307)
>> <[email protected]> wrote:
>>> Dan,
>>> 
>>> We're not there yet. I got back to this issue late yesterday and the fix
>>> doesn't work on my cluster.
>>> 
>>> I added the code that was recommended in the workaround
>>> (https://gist.github.com/wd15/9693712), but it now appears that we hang on
>>> the line:
>>> 
>>> mesh = fp.Gmsh3D(mshFile)
>> 
>> Did you try just running that file independently from anything else?
> 
> Yes, I'm running the simple file that just tries to read in the mesh.
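> 
> For reference, a minimal version of that reader is essentially just this (a 
> sketch; the mesh file name is assumed here):
> 
>   import fipy as fp
> 
>   # read an existing gmsh-generated mesh file; every process executes this
>   mshFile = 'tmsh3d.msh'
>   mesh = fp.Gmsh3D(mshFile)
>   print(mesh.getCellCenters())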
> 
>> 
>>> As with reading the .geo file, the cores are busy, but after an hour the
>>> code still has not gotten past the mesh = ... command.
>> 
>> I'm not sure what's going on, but can all the processes read the
>> mshFile? Check that. It's odd that this works for me across nodes and
>> not for you.
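>> 
>> One way to check that (a sketch, assuming mpi4py is available and the mesh
>> file is called 'tmsh3d.msh') is to have every rank report whether it can
>> see the file:
>> 
>>  import os
>>  from mpi4py import MPI
>> 
>>  comm = MPI.COMM_WORLD
>>  mshFile = 'tmsh3d.msh'  # substitute the actual .msh path
>>  # each rank prints which host it is on and whether the file is visible there
>>  print("rank %d on %s sees %s: %s"
>>        % (comm.Get_rank(), MPI.Get_processor_name(),
>>           mshFile, os.path.isfile(mshFile)))
>> 
>> Any rank that prints False points at a file visibility problem rather than
>> a FiPy problem.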
>> 
>> Also when I run your original problem:
>> 
>>  import fipy as fp
>>  geoFile = 'tmsh3d.geo'
>>  mesh = fp.Gmsh3D(geoFile)
>>  mx,my,mz = mesh.getCellCenters()
>> 
>> I get "IOError: [Errno 2] No such file or directory:
>> '/tmp/tmpt49sGo.msh'". Did you actually get that error?
> 
> I did not get this error. For some reason the processes just hung and kept 
> the cores saturated.
> I have since confirmed with our sys admins that the nodes in the cluster do 
> NOT share a /tmp area; each node has its own. That helps explain why I was 
> able to run several cores on one node but not across nodes.
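> 
> (A quick way to see this, as a sketch assuming mpi4py: have each rank print 
> its host name next to the temp directory Python picks up.)
> 
>   import tempfile
>   from mpi4py import MPI
> 
>   # the path is usually the same string on every node, but with a non-shared
>   # /tmp it refers to a different local disk on each host
>   print("rank %d on %s writes temp files under %s"
>         % (MPI.COMM_WORLD.Get_rank(), MPI.Get_processor_name(),
>            tempfile.gettempdir()))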
> 
>> 
>>> Question: If procID==0 runs gmsh to make the .msh file, why not just run
>>> gmsh by hand (or script) and have the mesh file ready to go?
>> 
>> Absolutely, just automating it away.
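>> 
>> If you do want to do it by hand, a one-time serial step along these lines
>> should work (a sketch, assuming gmsh is on your PATH and the geometry is in
>> tmsh3d.geo):
>> 
>>  import subprocess
>> 
>>  # generate the 3D mesh up front so the .msh file already exists before
>>  # the parallel FiPy job starts
>>  subprocess.check_call(['gmsh', '-3', 'tmsh3d.geo', '-o', 'tmsh3d.msh'])
>> 
>> After that the parallel script only has to point fp.Gmsh3D at 'tmsh3d.msh'.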
>> 
>>> Is there a way to change FiPy so that the user's disk space is used instead
>>> of /tmp? Then if one core creates the file, all the cores, on the various
>>> nodes, will see it.
>> 
>> Not in FiPy, but you can just tell Python. I didn't think of that the first
>> time, but it is the simplest solution.
>> 
>>  http://docs.python.org/2/library/tempfile.html#tempfile.tempdir
>> 
>> Just set 'tempfile.tempdir' to a shared directory at the top of the
>> script. I believe that will work for all uses of tempfile within the
>> current Python session.
>> 
>> This works for me across two nodes:
>> 
>>   import tempfile
>>   tempfile.tempdir = './'
>>   import fipy as fp
>>   geoFile = 'tmsh3d.geo'
>>   mesh = fp.Gmsh3D(geoFile)
>>   mx,my,mz = mesh.getCellCenters()
>>   print mx
>> 
> 
> I put in the import tempfile and redirected the tmp directory to a shared 
> area. Now I get an error!
> 
> The error, raised identically by each of the 12 cores (their tracebacks come 
> out interleaved), is:
> 
>    raise EnvironmentError("Gmsh version must be >= 2.0.")
> EnvironmentError: Gmsh version must be >= 2.0.
> 
> When I check, the gmsh version is:
> 
>> gmsh -version
> --------------------------------------------------------------------------
> Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011
>       The PETSc Team
>    [email protected]
> http://www.mcs.anl.gov/petsc/
> See docs/copyright.html for copyright information
> See docs/changes/index.html for recent updates.
> See docs/troubleshooting.html for problems.
> See docs/manualpages/index.html for help. 
> Libraries linked from 
> /home/geuzaine/src/petsc-3.1-p8/linux_complex_mumps_seq/lib
> --------------------------------------------------------------------------
> 2.8.3
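> 
> For what it's worth, the version number does show up on the last line; a 
> sketch of fishing it out of that mixed output (assuming gmsh is on the PATH):
> 
>   import re
>   import subprocess
> 
>   # run 'gmsh -version' and capture everything it prints, banner included
>   p = subprocess.Popen(['gmsh', '-version'], stdout=subprocess.PIPE,
>                        stderr=subprocess.STDOUT, universal_newlines=True)
>   out, _ = p.communicate()
> 
>   # keep only lines that look like a bare version number, e.g. "2.8.3"
>   versions = re.findall(r'^\s*(\d+\.\d+(?:\.\d+)?)\s*$', out, re.M)
>   print(versions[-1] if versions else 'no version number found')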
> 
> Interesting... getting an error feels like great progress! :)
> 
> Thanks,
> 
> Bill


_______________________________________________
fipy mailing list
[email protected]
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
