Cell-centered in parallel only works with the AMGBackend as
LinearSolver. If you change the property in your problem file, you
should be fine.
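As a minimal sketch, the property change in the problem file could look
like this (the type tag name TestProblem is a placeholder for your own
problem's type tag):

    #include <dumux/linear/amgbackend.hh>

    namespace Dumux {
    namespace Properties {

    // Use the parallel AMG backend instead of a sequential solver.
    SET_TYPE_PROP(TestProblem, LinearSolver, Dumux::AMGBackend<TypeTag>);

    } // namespace Properties
    } // namespace Dumux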
Maybe we should set the AMGBackend as the default solver to prevent
users from walking into that trap.
Kind regards
Bernd
On 05/27/2015 05:38 PM, Tr
Hi Bernd,
Thank you for your help. It works for me. I think I will just use the
DgfGridCreator for parallel YaspGrid runs.
I also have another question about parallel runs with the implicit
cell-centered method of the 2p model. When I try to run
/dumux/test/implicit/2p/test_cc2p (CubeGrid) in parallel, I
In fact, I will not fix the CubeGridCreator for parallel YaspGrid. Up
to and including Dune 2.3, the YaspGrid specialization of Dune's
StructuredGridFactory only creates a sequential YaspGrid. This is fixed
in Dune 2.4, where the sequential and parallel YaspGrid constructors
have been unified.
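For illustration, a sketch of the unified Dune 2.4 constructor for an
equidistant YaspGrid (domain size, cell counts and overlap are made-up
values; check the Dune 2.4 headers for the exact signature):

    #include <array>
    #include <bitset>
    #include <dune/common/fvector.hh>
    #include <dune/common/parallel/mpihelper.hh>
    #include <dune/grid/yaspgrid.hh>

    int main(int argc, char** argv)
    {
        Dune::MPIHelper::instance(argc, argv); // initializes MPI if available

        using Grid = Dune::YaspGrid<2>;
        Dune::FieldVector<double, 2> upperRight(1.0); // domain (0,0) to (1,1)
        std::array<int, 2> cells{{16, 16}};           // cells per direction
        std::bitset<2> periodic;                      // no periodic directions
        int overlap = 1;                              // overlap layers per process
        // The same constructor now works sequentially and in parallel:
        Grid grid(upperRight, cells, periodic, overlap);
        return 0;
    }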
THIS IS AN AUTOMATED MESSAGE, DO NOT REPLY.
The following task is now closed:
FS#265 - Decoupled 2p2c does not run in parallel
User who did this - Bernd Flemisch (bernd)
Reason for closing: Won't fix
Additional comments about closing: will be fixed upstream by Dune 2.4
More information can be f
THIS IS AN AUTOMATED MESSAGE, DO NOT REPLY.
The following task has a new comment added:
FS#265 - Decoupled 2p2c does not run in parallel
User who did this - Bernd Flemisch (bernd)
--
Actually, it runs in parallel on YaspGrid if the DgfGridCreator is
chosen instead of the CubeGridCreator.
Yes, this works. It seems that the instabilities result from the coupling.
However, the time steps are extremely small (something like 0.1-0.01 s) and the
Newton solver takes up to about 12 iterations to find out that it does not
converge (and then reduces the time step). So I guess my great-gra
Hi Tri Dat,
it is much easier than I thought. I just forgot to specify an overlap in
the dgf file. This is necessary for YaspGrid if the decoupled models are
supposed to run properly in parallel. Doing this right seems to give me
the correct result for parallel runs.
You can try for yourself
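For reference, a minimal dgf file with such an overlap could look like
this (interval bounds and cell counts are made-up values; the
GridParameter block with the overlap key is the part that was missing):

    DGF
    Interval
    0 0      % lower left corner
    1 1      % upper right corner
    10 10    % number of cells per direction
    #
    GridParameter
    overlap 1   % overlap layer YaspGrid needs for parallel runs
    #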
THIS IS AN AUTOMATED MESSAGE, DO NOT REPLY.
The following task has a new comment added:
FS#267 - MPI exit with error: MPI_Op_free after MPI_FINALIZE
User who did this - Bernd Flemisch (bernd)
--
I understand it now. Our GridCreators have (a pointer to) the grid as a static
member variable.
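A minimal self-contained sketch of why that is a problem (all names are
made up; the real classes live in DuMuX/Dune): a grid held with static
storage duration is only destroyed during static destruction, which can
happen after MPI has already been finalized, producing errors like
"MPI_Op_free after MPI_FINALIZE".

    #include <iostream>
    #include <memory>

    // Stand-in for a grid that releases MPI resources in its destructor.
    struct Grid
    {
        ~Grid() { std::cout << "grid destructor: frees MPI resources\n"; }
    };

    // Stand-in for a GridCreator holding the grid as a static member.
    struct GridCreator
    {
        static std::shared_ptr<Grid> grid; // static storage duration
    };
    std::shared_ptr<Grid> GridCreator::grid;

    int main()
    {
        // Imagine MPI being initialized here (e.g. Dune::MPIHelper).
        GridCreator::grid = std::make_shared<Grid>();
        return 0;
        // MPI_Finalize runs in another static destructor; if the static
        // grid pointer is destroyed after it, the grid's MPI cleanup
        // calls hit an already finalized MPI.
    }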
Ok. Can you try to use your multidomain setting, but don't couple:
instead, provide normal boundary conditions such as no-flux for each
subdomain also at the coupling interface? Does this work?
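A hypothetical excerpt of such a no-flux setup in a DuMuX 2.x subdomain
problem class (the method names follow the usual problem interface; the
surrounding class and its types are omitted):

    // Treat every boundary, including the former coupling interface,
    // as a zero-flux (Neumann) boundary.
    void boundaryTypesAtPos(BoundaryTypes& values,
                            const GlobalPosition& globalPos) const
    {
        values.setAllNeumann();
    }

    // Zero flux for all balance equations.
    void neumannAtPos(PrimaryVariables& values,
                      const GlobalPosition& globalPos) const
    {
        values = 0.0;
    }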
Since both SuperLU and Pardiso are direct solvers, I would not expect
that one can solve a system that the other cannot.
Hi Georg,
I switched to UMFPack; it is better* and also a direct solver. Pardiso
could be used, but you need a license.
Have a look at one of our Stokes examples to see how to use UMFPack.
* Faster, terminates more reliably with singular matrices and
subjectively converges better.
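In short, the switch could look like this (the type tag name
StokesTestProblem is a placeholder; check the Stokes tests for the real
setup):

    #include <dumux/linear/seqsolverbackend.hh>

    namespace Dumux {
    namespace Properties {

    // Use the direct UMFPack solver as the linear solver backend.
    SET_TYPE_PROP(StokesTestProblem, LinearSolver,
                  Dumux::UMFPackBackend<TypeTag>);

    } // namespace Properties
    } // namespace Dumux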
Bye
Christoph
--
Hi Bernd,
I already use SuperLU. Btw, what about the PARDISO solver? Can it be
used for multidomain applications, and could this be an improvement?
Kind regards
Georg
From: Dumux [mailto:dumux-boun...@listserv.uni-stuttgart.de] On behalf of
Bernd Flemisch
Sent: Wednesday, May 27, 2015 12:36
I don't understand why the linear solver should get into trouble because
of this modification. What linear solver do you use? If you use an
iterative one, can you check whether you run into an exception from the
preconditioner when compiling with debug options, and also double-check
by replacing the iterative solver with a direct one.
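For that double-check, a sketch of swapping in a direct solver (again
with a placeholder type tag MyProblem; SuperLU must be found by the
build system):

    #include <dumux/linear/seqsolverbackend.hh>

    namespace Dumux {
    namespace Properties {

    // Temporarily use a direct solver to rule out iterative-solver issues.
    SET_TYPE_PROP(MyProblem, LinearSolver, Dumux::SuperLUBackend<TypeTag>);

    } // namespace Properties
    } // namespace Dumux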