Hi Ben,

This was some 450 cores.

All that I did was replace SerialMesh with ParallelMesh in my driver routine; a minimal sketch of the swap is below. I will look into the output in some more detail.
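For concreteness, here is roughly the kind of change I mean. This is a minimal sketch rather than my actual driver: the build_cube problem is just a stand-in, and the communicator-taking constructors may not exist on older libMesh versions.

#include "libmesh/libmesh.h"
#include "libmesh/parallel_mesh.h"   // was "libmesh/serial_mesh.h"
#include "libmesh/mesh_generation.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init (argc, argv);

  // The only change in the driver:
  //   SerialMesh mesh (init.comm());
  // becomes
  ParallelMesh mesh (init.comm());
  // (older libMesh versions: "ParallelMesh mesh;" with no communicator)

  // Stand-in for the real problem setup.
  MeshTools::Generation::build_cube (mesh, 20, 20, 20,
                                     0., 1., 0., 1., 0., 1., HEX8);

  // Each processor now holds only its local elements plus a layer
  // of ghost elements, rather than a full copy of the mesh.
  mesh.print_info();

  return 0;
}

Everything downstream of the mesh construction stayed the same.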
Manav

On Apr 4, 2013, at 8:00 AM, "Kirk, Benjamin (JSC-EG311)" <[email protected]> wrote:

> That sounds like a good savings - how many cores?
>
> ParallelMesh should be capable of writing pieces to many files, or streaming
> into one file. In the latter case the mesh should be completely compatible
> with SerialMesh.
>
> -Ben
>
> On Apr 4, 2013, at 12:23 AM, "Manav Bhatia" <[email protected]> wrote:
>
>> Hi Roy,
>>
>> At this point, I do not have a need for off-processor element data. So, the
>> current status of ParallelMesh could be a good thing.
>>
>> I did give it a go for my application, and so far it seems to be working
>> well. The memory footprint of each process has also come down significantly
>> (from ~4GB to ~0.8GB), which is great!
>>
>> I noticed that the .xdr restart solutions are now written one per mesh
>> block. This seems to suggest that this can be read into a ParallelMesh data
>> structure for a restart, and not a SerialMesh. Is this correct?
>>
>> Thanks,
>> Manav
>>
>> On Apr 3, 2013, at 2:25 AM, Roy Stogner <[email protected]> wrote:
>>
>>>
>>> On Wed, 3 Apr 2013, Manav Bhatia wrote:
>>>
>>>> As a related question, if my code is running on a multicore machine,
>>>> then can I use --n-threads to parallelize both the matrix assembly
>>>> and the Petsc linear solvers? Or do I have to use mpi for Petsc?
>>>
>>> PETSc isn't multithreaded, but I'm told it can be built to use
>>> third-party preconditioners which are multithreaded, so that you get
>>> decent scaling out of your solve. I haven't done this myself.
>>>
>>>> I am running problems with over a million elements, and using mpi on
>>>> my multicore machine makes each process consume over 1GB of RAM.
>>>
>>> ParallelMesh was invented to get me out of a similar jam.
>>>
>>>> On Apr 3, 2013, at 1:24 AM, Manav Bhatia <[email protected]> wrote:
>>>>
>>>>> I am curious if the parallel mesh is now suitable for general use.
>>>
>>> Unfortunately ParallelMesh may never be suitable for "general" use,
>>> because the most general SerialMesh-using codes sometimes assume at
>>> the application level that every process can see every element. If
>>> your problem includes contact, integro-differential terms, or any such
>>> coupling beyond the layer of ghost elements that ParallelMesh exposes,
>>> then you have to do some very careful manual communications to make
>>> that work on a distributed mesh.
>>>
>>> ParallelMesh is also still much less tested than SerialMesh - it works
>>> with all the examples and all the compatible application codes I've
>>> tried, but I wouldn't be surprised if there are tricky AMR or other
>>> corner cases where it breaks in nasty ways.
>>>
>>> More testing would certainly be appreciated.
>>> ---
>>> Roy
