On Wed, 3 Apr 2013, Manav Bhatia wrote:

> As a related question, if my code is running on a multicore machine,
> then can I use --n-threads to parallelize both the matrix assembly
> and the Petsc linear solvers? Or do I have to use mpi for Petsc?

PETSc isn't multithreaded, but I'm told it can be built to use
third-party preconditioners which are multithreaded, so that you get
decent scaling out of your solve.  I haven't done this myself.
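
For what it's worth, the two kinds of parallelism can still be
combined: MPI ranks handle the PETSc solve, while --n-threads
parallelizes the assembly loop within each rank.  Roughly (untested,
and double-check the exact option spellings against your libMesh and
PETSc versions):

  mpirun -np 4 ./myapp --n-threads=2 -ksp_type cg -pc_type bjacobi

Each of the 4 ranks then uses 2 threads during assembly, while the
Krylov solve itself stays MPI-only.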

> I am running problems with over a million elements, and using mpi on
> my multicore machine makes each process consume over 1GB of RAM.

ParallelMesh was invented to get me out of a similar jam.
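
The switch is usually mechanical; here's an untested sketch (the
constructor arguments and header names may differ between libMesh
versions, so treat it as illustrative only):

  #include "libmesh/libmesh.h"
  #include "libmesh/parallel_mesh.h"
  #include "libmesh/mesh_generation.h"
  #include "libmesh/enum_elem_type.h"

  using namespace libMesh;

  int main (int argc, char ** argv)
  {
    LibMeshInit init (argc, argv);

    // Distributed mesh: each processor keeps only its own elements
    // plus a layer of ghosts, instead of the full serialized mesh
    // that SerialMesh replicates on every process.
    ParallelMesh mesh (init.comm());

    // ~1M hex elements on the unit cube.
    MeshTools::Generation::build_cube (mesh, 100, 100, 100,
                                       0., 1., 0., 1., 0., 1., HEX8);

    mesh.print_info();

    return 0;
  }

With SerialMesh every process would hold all of those elements; with
ParallelMesh the per-process memory footprint should drop roughly in
proportion to the number of processors, modulo the ghost layer.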

> On Apr 3, 2013, at 1:24 AM, Manav Bhatia <[email protected]> wrote:
>
>>  I am curious if the parallel mesh is now suitable for general use.

Unfortunately ParallelMesh may never be suitable for "general" use,
because the most general SerialMesh-using codes sometimes assume at
the application level that every process can see every element.  If
your problem includes contact, integro-differential terms, or any such
coupling beyond the layer of ghost elements that ParallelMesh exposes,
then you have to do some very careful manual communications to make
that work on a distributed mesh.

ParallelMesh is also still much less tested than SerialMesh - it works
with all the examples and all the compatible application codes I've
tried, but I wouldn't be surprised if there are tricky AMR or other
corner cases where it breaks in nasty ways.

More testing would certainly be appreciated.
---
Roy
