Johan Hoffman wrote:
> Hi all,
> 
> Connected to this discussion is also the MSc thesis work on DOLFIN
> parallelization by Nicklas Jansson at KTH. He has now started working
> on this based on the updated DOLFIN TODO list. He has tried to send an
> email to this list ([email protected]) but it appears to be stuck in a
> filter awaiting moderator approval.

If he joins the list, he'll be able to post.

> Maybe someone (a moderator) could
> help out so that we can get past this, to better coordinate
> parallelization efforts?
> 

One point on the TODO list: we discussed mesh partitioning some time 
ago and decided against ParMETIS and METIS because they do not use a 
GPL-compatible license. Magnus has implemented a nice partitioning 
interface which uses SCOTCH, which does have a GPL-compatible license.
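
For anyone who hasn't looked at it, the underlying library call is
straightforward. A rough sketch against the plain SCOTCH C API (just
an illustration of the backend call, not Magnus's actual interface,
with error checking omitted and a made-up toy graph):

  #include <stdio.h>   /* scotch.h needs FILE from stdio.h */
  #include <scotch.h>

  int main()
  {
    /* Dual graph of a tiny mesh: 4 cells in a cycle, CSR-style */
    SCOTCH_Num verttab[] = { 0, 2, 4, 6, 8 };          /* offsets    */
    SCOTCH_Num edgetab[] = { 1, 3, 0, 2, 1, 3, 2, 0 }; /* neighbours */
    SCOTCH_Num parttab[4];                             /* cell->part */

    SCOTCH_Graph graph;
    SCOTCH_Strat strat;

    SCOTCH_graphInit(&graph);
    SCOTCH_graphBuild(&graph, 0, 4, verttab, 0, 0, 0, 8, edgetab, 0);
    SCOTCH_stratInit(&strat);                     /* default strategy */
    SCOTCH_graphPart(&graph, 2, &strat, parttab); /* 2 partitions */

    for (int i = 0; i < 4; i++)
      printf("cell %d -> partition %d\n", i, (int) parttab[i]);

    SCOTCH_stratExit(&strat);
    SCOTCH_graphExit(&graph);
    return 0;
  }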

Garth

> Thanks!
> 
> /Johan
> 
> 
>>
>> Anders Logg wrote:
>>> On Sat, Dec 01, 2007 at 05:02:13PM +0000, Garth N. Wells wrote:
>>>> Looks like you forgot to add MPIManager to the repository.
>>>>
>>>> Do we want a class MPIManager, or should we let PETSc take care of
>>>> this? If we create an MPI object ourselves, it will probably clash
>>>> with PETSc.
>>> We need it if we sometimes want to use MPI without PETSc (which is not
>>> unlikely even if PETSc is the default).
>>>
>>> MPIManager works like PETScManager and takes care of the global
>>> initialization at startup:
>>>
>>>   MPIManager::init();
>>>
>>> and also calls finalize() automatically when the program exits. It
>>> talks to MPI to see if it has already been initialized (by PETSc,
>>> itself or someone else) and does nothing if that is the case.
>>>
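
As a side note, the check-before-init pattern Anders describes is
simple to get right. A minimal sketch (not the actual DOLFIN class,
and assuming the MPI-2 calls MPI_Initialized() and MPI_Finalized()
are available):

  #include <mpi.h>

  class MPIManager
  {
  public:

    // Initialize MPI unless someone (e.g. PETSc) already has
    static void init()
    {
      int initialized = 0;
      MPI_Initialized(&initialized);
      if (!initialized)
      {
        MPI_Init(0, 0);         // MPI-2 allows null argc/argv
        initialized_here = true;
      }
    }

    // The destructor of the static instance runs at program exit
    ~MPIManager()
    {
      // Finalize only if we did the initialization ourselves and
      // no one else has finalized already
      int finalized = 0;
      MPI_Finalized(&finalized);
      if (initialized_here && !finalized)
        MPI_Finalize();
    }

  private:

    MPIManager() {}

    // Single static instance whose destruction triggers finalize()
    static MPIManager manager;

    // Remember whether we were the ones who called MPI_Init()
    static bool initialized_here;
  };

  MPIManager MPIManager::manager;
  bool MPIManager::initialized_here = false;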
>> OK.
>>
>> Garth
> 
> 
_______________________________________________
DOLFIN-dev mailing list
[email protected]
http://www.fenics.org/mailman/listinfo/dolfin-dev
