Hello everyone,
At times it can be useful to combine different data types in the same
problem. For example, there are parts of an algorithm which may be
memory-bound (requiring float) and operations where more precision is
needed (requiring double or float_128).
Can single and double precision be combined in the same PETSc program?
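A common pattern behind this kind of mixing is iterative refinement: do the
expensive inner solve in float and accumulate the residual in double. Below
is a minimal standalone sketch of that idea; this is not PETSc code, and the
test matrix, the Jacobi inner solver, and all names are purely illustrative.

/* Mixed-precision iterative refinement sketch (not PETSc API):
 * the inner solve runs in float, the residual is accumulated in double. */
#include <stdio.h>

#define N 3

/* A few Jacobi sweeps in single precision: approximately solves A c = r.
 * Assumes A is diagonally dominant so the sweeps converge. */
static void jacobi_float(const float A[N][N], const float r[N], float c[N])
{
    for (int i = 0; i < N; i++) c[i] = 0.0f;
    for (int sweep = 0; sweep < 20; sweep++) {
        float cnew[N];
        for (int i = 0; i < N; i++) {
            float s = r[i];
            for (int j = 0; j < N; j++)
                if (j != i) s -= A[i][j] * c[j];
            cnew[i] = s / A[i][i];
        }
        for (int i = 0; i < N; i++) c[i] = cnew[i];
    }
}

int main(void)
{
    /* Diagonally dominant test system, stored in both precisions. */
    double A[N][N] = {{4, 1, 0}, {1, 4, 1}, {0, 1, 4}};
    double b[N]    = {1, 2, 3};
    double x[N]    = {0, 0, 0};
    float  Af[N][N];

    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            Af[i][j] = (float)A[i][j];

    for (int iter = 0; iter < 10; iter++) {
        float rf[N], cf[N];
        /* Residual r = b - A x in double: this is where the extra
         * precision pays off. */
        for (int i = 0; i < N; i++) {
            double r = b[i];
            for (int j = 0; j < N; j++) r -= A[i][j] * x[j];
            rf[i] = (float)r;
        }
        jacobi_float(Af, rf, cf);                 /* cheap low-precision solve */
        for (int i = 0; i < N; i++) x[i] += (double)cf[i];
    }
    printf("x = %.12f %.12f %.12f\n", x[0], x[1], x[2]);
    return 0;
}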
Thanks a lot Matt!
Were you referring to
http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex62.c.html
?
I do not see any statements related to PCFieldSplit there. Am I
missing something here?
Thanks,
Hom Nath
On Tue, Feb 9, 2016 at 10:19 AM, Matthew Knepley wrote:
albert gutiérrez writes:
> Hello everyone,
>
> At times it can be useful to combine different data types in the same
> problem. For example, there are parts of an algorithm which may be
> memory-bound (requiring float) and operations where more precision is
> needed (requiring double or float_128).
Thank you so much, Barry!
For my small test case, with -pc_fieldsplit_block_size 4, the program
runs, although the answer is not correct. At least now I have
something to look at. I am using PCFieldSplitSetIS to set the
fields. Do I still need to use -pc_fieldsplit_block_size?
In my case each
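For reference, here is a hedged sketch of how the splits might be defined
with index sets. It assumes an interleaved layout with 4 components per mesh
point and that each rank's ownership range starts at a multiple of 4; "ksp",
"rstart", and "npoints_local" are illustrative names, not from the original
code.

/* Sketch: define the fieldsplit fields explicitly with index sets. */
PC pc;
KSPGetPC(ksp, &pc);
PCSetType(pc, PCFIELDSPLIT);

for (int f = 0; f < 4; f++) {
  IS   is;
  char name[8];

  /* Component f on this rank: rows rstart+f, rstart+f+4, ... */
  ISCreateStride(PETSC_COMM_WORLD, npoints_local, rstart + f, 4, &is);
  snprintf(name, sizeof(name), "%d", f);
  PCFieldSplitSetIS(pc, name, is);
  ISDestroy(&is); /* the PC keeps its own reference */
}

As far as I can tell, once the splits are supplied via PCFieldSplitSetIS,
-pc_fieldsplit_block_size should not also be needed; the block-size option
is the alternative way of defining the fields.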
On Tue, Feb 9, 2016 at 7:06 AM, Florian Lindner wrote:
> Hello,
>
> I use PETSc with 4 MPI processes and I experience different results
> when using different distributions of rows among the ranks. The code
> looks like this:
>
If you don't specify a preconditioner via -pc_type XXX, the default is
block Jacobi with ILU on each block (BJacobi/ILU). Each block is the
diagonal portion of the matrix owned by one process, so this
preconditioner will yield different results on different numbers of
MPI processes, and also for a fixed number of MPI processes when the
rows are distributed differently among the ranks.
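To isolate this, one small sketch: switch to a preconditioner whose result
does not depend on the parallel row distribution, e.g. point Jacobi, which
is identical up to rounding in the parallel reductions. Here "_solver"
refers to the KSP from the snippet quoted below.

/* Debugging aid: a layout-independent preconditioner. */
PC pc;
KSPGetPC(_solver, &pc);
PCSetType(pc, PCJACOBI); /* or on the command line: -pc_type jacobi */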
On Tue, Feb 9, 2016 at 9:10 AM, Hom Nath Gharti wrote:
> Thank you so much, Barry!
>
> For my small test case, with -pc_fieldsplit_block_size 4, the program
> runs, although the answer is not correct. At least now I have
> something to look at. I am using PCFieldSplitSetIS
Hello,
I use PETSc with 4 MPI processes and I experience different results
when using different distributions of rows among the ranks. The code
looks like this:
KSPSetOperators(_solver, _matrixC.matrix, _matrixC.matrix);
// _solverRtol = 1e-9
KSPSetTolerances(_solver, _solverRtol, PETSC_DEFAULT, PETSC_DEFAULT,
                 PETSC_DEFAULT); // abstol, dtol, maxits left at defaults
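For completeness, a hypothetical continuation of this snippet that solves
and reports why the iteration stopped; "rhs" and "solution" are
illustrative Vec names, not from the original code.

/* Solve and report the convergence reason and iteration count. */
KSPConvergedReason reason;
PetscInt           its;

KSPSetFromOptions(_solver); /* picks up e.g. -ksp_monitor_true_residual */
KSPSolve(_solver, rhs, solution);
KSPGetConvergedReason(_solver, &reason);
KSPGetIterationNumber(_solver, &its);
PetscPrintf(PETSC_COMM_WORLD, "converged reason %d after %d iterations\n",
            (int)reason, (int)its);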
Addendum: the KSP solver shows very different convergence behavior:
WRONG:
[0] KSPConvergedDefault(): Linear solver has converged. Residual norm
6.832362172732e+06 is less than relative tolerance 1.e-09
times initial right hand side norm 6.934533099989e+15 at iteration 8447
(Note: 1.e-09 * 6.934533099989e+15 is about 6.93e+06, so the relative
test passes even though the absolute residual is still about 6.8e+06;
the initial right-hand-side norm is simply enormous.)
RIGHT:
[0]
On Tue, Feb 9, 2016 at 9:31 AM, Hom Nath Gharti wrote:
> Thanks a lot Matt!
>
> Were you referring to
>
> http://www.mcs.anl.gov/petsc/petsc-current/src/snes/examples/tutorials/ex62.c.html
> ?
>
> I do not see any statements related to PCFieldSplit there. Am I
> missing
Sounds interesting! Thanks a lot Matt! I will have a look.
Hom Nath
On Tue, Feb 9, 2016 at 10:36 AM, Matthew Knepley wrote:
> On Tue, Feb 9, 2016 at 9:31 AM, Hom Nath Gharti wrote:
>>
>> Thanks a lot Matt!
>>
>> Were you referring to
>>
>>