Hi James,

When I say parallel I am referring, in general, to the use of multiple
processors/cores/threads to speed up simulations.
In this type of simulation the speed-up is usually obtained by domain
decomposition: the computation for each sub-domain is assigned to a
different MPI process.
A lot of the available low-level software (solution of linear systems,
etc.) was designed to use one CPU per MPI process, since multi-core and
especially multi-threaded hardware is relatively new. (This is the case
for PETSc, for example; I am not familiar with dune.)
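To give a (very simplified) picture of what I mean, something like the
plain-MPI sketch below, where each process owns one slice of the domain.
The domain size and the splitting rule here are just invented for
illustration:

    /* Invented example: a 1-D domain of N cells split across MPI
       ranks, one contiguous sub-domain per process. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        const int N = 1000;   /* total number of cells (made up) */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* each process owns a contiguous slice of the domain */
        int local_n = N / size + (rank < N % size ? 1 : 0);
        printf("process %d owns %d cells\n", rank, local_n);

        /* ...assemble and solve the local sub-problem here,
           exchanging ghost-cell values with neighbours via MPI... */

        MPI_Finalize();
        return 0;
    }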

Using multiple cores with the same software (one core per MPI process)
still gives reasonable scaling; multi-threading gives no benefit at all.
PETSc is moving towards a hybrid implementation that ideally uses domain
decomposition at the CPU/core level, and multiple cores/threads to
perform local (non-MPI) parallelization (e.g. vector operations, etc.).
Note that this second type of parallelization is partially implemented
by the compiler.
This page could be a good starting point if you want to look at this in
more detail:
http://www.mcs.anl.gov/petsc/miscellaneous/petscthreads.html
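
Schematically, I picture the hybrid model like the sketch below (my own
illustration of the idea, not actual PETSc code): MPI between
sub-domains, threads for the local kernels, and the compiler free to
vectorize the inner loop.

    /* Sketch of the hybrid idea: MPI across sub-domains, OpenMP
       threads for local (non-MPI) work. Compile with e.g.
       "mpicc -fopenmp"; without OpenMP the pragma is ignored. */
    #include <mpi.h>

    /* a local kernel, parallelized over threads inside one MPI
       process; the compiler may additionally vectorize the loop */
    static void local_axpy(double *y, const double *x, double a, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            y[i] += a * x[i];
    }

    int main(int argc, char **argv)
    {
        int provided;
        /* ask for an MPI library that tolerates threads */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        double x[100], y[100];
        for (int i = 0; i < 100; i++) { x[i] = 1.0; y[i] = 2.0; }

        local_axpy(y, x, 0.5, 100);   /* threaded local work */

        MPI_Finalize();
        return 0;
    }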

Regarding unstructured meshes you are absolutely right: "If you are not
extremely careful, your simulation will run VERY SLOWLY with an
unstructured grid". However, the problem was tackled years ago, and
there are already good software packages with great performance (e.g.
PT-Scotch, ParMETIS). These packages handle mesh partitioning and
computing-load distribution, and they are usually called from within
libraries such as PETSc. Having said that, I am sure there is still a
lot to do... but if the performance for unstructured grids is taken
care of by these packages, it is not too bad.
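
For example, PETSc exposes these partitioners through its
MatPartitioning object. A rough sketch of how one would ask ParMETIS to
repartition a mesh's dual graph (the toy 4-cell graph below is
invented, error checking is omitted, and PETSc must be built with
ParMETIS support) could look like:

    /* Rough sketch: partitioning the dual graph of a toy 4-cell mesh
       (a chain 0-1-2-3) via PETSc's MatPartitioning and ParMETIS.
       Intended to be run on a single process for this toy example. */
    #include <petscmat.h>

    int main(int argc, char **argv)
    {
        Mat             adj;
        MatPartitioning part;
        IS              newproc;
        PetscInt        *ia, *ja;

        PetscInitialize(&argc, &argv, NULL, NULL);

        /* CSR adjacency of the 4-cell chain. MatCreateMPIAdj takes
           ownership of ia/ja, so they must be allocated with
           PetscMalloc1, not placed on the stack. */
        PetscMalloc1(5, &ia);
        PetscMalloc1(6, &ja);
        ia[0]=0; ia[1]=1; ia[2]=3; ia[3]=5; ia[4]=6;
        ja[0]=1; ja[1]=0; ja[2]=2; ja[3]=1; ja[4]=3; ja[5]=2;

        MatCreateMPIAdj(PETSC_COMM_WORLD, 4, 4, ia, ja, NULL, &adj);

        MatPartitioningCreate(PETSC_COMM_WORLD, &part);
        MatPartitioningSetAdjacency(part, adj);
        MatPartitioningSetType(part, MATPARTITIONINGPARMETIS);
        MatPartitioningApply(part, &newproc); /* new owner per cell */

        ISView(newproc, PETSC_VIEWER_STDOUT_WORLD);

        ISDestroy(&newproc);
        MatPartitioningDestroy(&part);
        MatDestroy(&adj);
        PetscFinalize();
        return 0;
    }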

On the other hand, unstructured grids give great flexibility to
discretise problems: in reservoir simulation, for example, for problems
presenting faults, or when you want to refine the mesh close to a well.

I hope I explained myself a bit better, and sorry for my poor computing
terminology; I am an engineer trying to explain myself the best I can.

Paolo

On Wed, Aug 27, 2014 at 5:34 PM, wireless <wirel...@tampabay.rr.com> wrote:

> On 08/27/14 08:34, Paolo Orsini wrote:
>
>  For that reason I can comment only from a user point of view.
>> One of the reasons we are looking at OPM is because we are looking
>> for an open source black-oil simulator which we can start testing,
>> using, and eventually help with the development of the missing components.
>>
>
>  At the moment the main priorities we look for are:
>> - parallel
>> - unstructured grid.
>>
>
>  Higher order discretization methods are less important. CV first order,
>> eventually second, would be more than enough (at least for us).
>>  From what I understood both grid interfaces can deal with unstructured
>> grids, but A) is not parallel.
>> However A) is more developed for black oil applications.
>>
>
>  Paolo
>>
>
> Could you elaborate on what your ideal (conceptual) need/desire is
> for "parallel". As a computer scientist, I'm curious as to the
> hardware/software issues you seek to solve (make more efficient and faster)
> by increased parallelism and all suggestions to achieve a more robust
> simulation.
>
> What exactly is your context (details) of what you need/want in an
> unstructured grid ?
>
>
> Grids --> meshes [1] http://en.wikipedia.org/wiki/Types_of_mesh
>
> <link_snip>: "An unstructured grid is identified by irregular
> connectivity. It cannot easily be expressed as a two-dimensional or
> three-dimensional arrays in computer memory. "  If you are not extremely
> careful, your simulation will run VERY SLOWLY with an unstructured grid.
>
>
> James
>
_______________________________________________
Opm mailing list
Opm@opm-project.org
http://www.opm-project.org/mailman/listinfo/opm
