Thanks, Hong.
I see. It would have been better if "-snes_type test" had never existed in the first place.
Fande,
On Mon, Jun 4, 2018 at 4:01 PM, Zhang, Hong wrote:
>
> It was meant to be "-snes_type test".
>
> Hong (Mr.)
>
> > On Jun 4, 2018, at 4:59 PM, Zhang, Hong wrote:
> >
> > -snes_type has been removed. We can just use -snes_test_jacobian instead.
> > Note that the test is done every time the Jacobian is computed.
> >
> >> On Jun 4, 2018, at 3:27 PM, Kong, Fande
-snes_type has been removed. We can just use -snes_test_jacobian instead. Note
that the test is done every time the Jacobian is computed.
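For example (a minimal sketch; "./app" stands in for any SNES-based executable):

  ./app -snes_test_jacobian -snes_test_jacobian_view

The first option compares the hand-coded Jacobian against a finite-difference approximation at every Jacobian evaluation; the second additionally displays the differences for inspection.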
Hong (Mr.)
> On Jun 4, 2018, at 3:27 PM, Kong, Fande wrote:
>
> Hi PETSc Team,
>
> I was wondering if "-snes_type test" is gone? Quite a few MOOSE
Hello,
I am using KSP in KSPPREONLY mode to do a direct solve on an A*x = b
system, with the solvers MUMPS, CPardiso, and Pardiso. For Pardiso, is it
possible to control the solver execution step (denoted "phase" in Intel's
docs)? I would like to be able to control when it refactors as one
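For context, a minimal sketch of this kind of setup (assumptions: A, b, and x are created and assembled elsewhere; MATSOLVERMKL_PARDISO or MATSOLVERMKL_CPARDISO can be swapped in for MUMPS):

  #include <petscksp.h>

  /* Sketch: one-shot direct solve of A*x = b using KSPPREONLY + PCLU. */
  static PetscErrorCode DirectSolve(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);  /* no Krylov iterations: apply the PC once */
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);          /* full LU factorization */
    ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
    /* KSPSetReusePreconditioner(ksp, PETSC_TRUE) would keep the current
       factorization across subsequent solves, i.e. avoid refactoring. */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    return 0;
  }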
Looks like `-snes_test_jacobian` and `-snes_test_jacobian_view` are the
options to use...
On Mon, Jun 4, 2018 at 2:27 PM, Kong, Fande wrote:
> Hi PETSc Team,
>
> I was wondering if "-snes_type test" is gone? Quite a few MOOSE users
> use this option to test their Jacobian matrices.
>
> If
Hi PETSc Team,
I was wondering if "-snes_type test" is gone? Quite a few MOOSE users
use this option to test their Jacobian matrices.
If it is gone, what was the reason?
Fande,
Michael, I can compile and run your test. I am now profiling it. Thanks.
--Junchao Zhang
On Mon, Jun 4, 2018 at 11:59 AM, Michael Becker <
michael.bec...@physik.uni-giessen.de> wrote:
> Hello again,
> this took me longer than I anticipated, but here we go.
> I did reruns of the cases where
Hello again,
this took me longer than I anticipated, but here we go.
I did reruns of the cases where only half the processes per node were
used (without -log_sync):
[truncated -log_view timing table: 125 procs (1st and 2nd run) vs. 1000 procs (1st and 2nd run); Max and Ratio columns]
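(For reproducibility, the runs were presumably of the form below; a sketch, with the binary name assumed:

  mpiexec -n 125 ./app -log_view             # plain timing summary
  mpiexec -n 125 ./app -log_view -log_sync   # barrier before each event, to separate load imbalance from communication time

)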
On Mon, Jun 4, 2018 at 1:03 PM, Jean-Yves LExcellent <
jean-yves.l.excell...@ens-lyon.fr> wrote:
>
> Thanks for the details of your needs.
>
> For the first application, the sparse RHS feature with distributed
> solution should effectively be fine.
>
I'll add parallel support of this feature in
Ok, I now realize that I had implemented the boundary conditions in an
unnecessarily complicated way... As you pointed out, I can just
manipulate individual matrix rows to enforce the BCs. That way, I
never have to call MatPtAP or do any other expensive operations. Probably
that's what is
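For the record, a minimal sketch of that row-based approach (hypothetical names: bc_rows holds the constrained global row indices, x carries the prescribed boundary values, b is the right-hand side):

  #include <petscmat.h>

  /* Sketch: enforce Dirichlet BCs by editing individual matrix rows,
     with no MatPtAP or other expensive operations involved. */
  static PetscErrorCode ApplyDirichletBCs(Mat A, Vec x, Vec b,
                                          PetscInt nbc, const PetscInt bc_rows[])
  {
    PetscErrorCode ierr;
    /* Zero each constrained row, put 1.0 on its diagonal, and set
       b[row] = 1.0 * x[row], so the solve reproduces the boundary values. */
    ierr = MatZeroRows(A, nbc, bc_rows, 1.0, x, b);CHKERRQ(ierr);
    return 0;
  }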
Thanks for the details of your needs.
For the first application, the sparse RHS feature with distributed
solution should effectively be fine.
For the second one, a future distributed RHS feature (not currently
available in MUMPS) might help if the centralized sparse RHS is too
memory
On Mon, Jun 4, 2018 at 3:50 AM, Lukas van de Wiel <
lukas.drinkt.t...@gmail.com> wrote:
> Hi Matt,
>
> Whoa, thanks for the invite, but that is a bit too short of a notice for
> the circumstances.
> I will gladly come to the next one. And if it is far away, I have good
> excuse to prepend a
Hi Matt,
Whoa, thanks for the invite, but that is a bit too short of a notice for
the circumstances.
I will gladly come to the next one. And if it is far away, I have good
excuse to prepend a holiday to it. :-)
Will it be at the beginning of June in 2019 as well?
Cheers
Lukas
On Fri, Jun 1,