Hi all,
I've built a MOOSE application and the problem I'm studying solves great
using serial LU. However, when I try to solve in parallel using SuperLU,
I encounter many DIVERGED_LINE_SEARCH errors using the default bt
line search. If I switch to line_search=none, then instead of
It solved beautifully with MUMPS. Thank you Matt!
Alex
On 09/16/2015 10:58 AM, Matthew Knepley wrote:
On Wed, Sep 16, 2015 at 8:28 AM, Alexander Lindsay <adlin...@ncsu.edu> wrote:
Hi all,
I've built a MOOSE application and the problem I'm stu
bsm...@mcs.anl.gov> wrote:
>
>
> > On Dec 12, 2017, at 11:26 AM, Alexander Lindsay <
> alexlindsay...@gmail.com> wrote:
> >
> > Ok, I'm going to go back on my original statement...the physics being
> run here is a sub-set of a much larger set of physics; for the curr
types chosen. I would say the problem was definitely on our end!
On Tue, Dec 12, 2017 at 2:49 PM, Matthew Knepley <knep...@gmail.com> wrote:
> On Tue, Dec 12, 2017 at 3:19 PM, Alexander Lindsay <
> alexlindsay...@gmail.com> wrote:
>
>> I'm helping deb
I'm not using any hand-coded Jacobians.
Case 1 options: -snes_fd -pc_type lu
0 Nonlinear |R| = 2.259203e-02
0 Linear |R| = 2.259203e-02
1 Linear |R| = 7.821248e-11
1 Nonlinear |R| = 2.258733e-02
0 Linear |R| = 2.258733e-02
1 Linear |R| = 5.277296e-11
2 Nonlinear |R| =
the behavior, I would expect both `-snes_mf_operator -snes_fd`
and `-snes_fd` to suffer from the same approximations, right?
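(Writing out my mental model of these flags so someone can correct me if it's off base:
-snes_fd                    assemble the entire Jacobian by finite-differencing the residual and use it as both the operator and the preconditioning matrix
-snes_mf_operator           apply the Jacobian action matrix-free via directional differencing of the residual, while the preconditioner is built from whatever matrix is assembled
-snes_mf_operator -snes_fd  matrix-free action for the operator, with the finite-differenced matrix used only to build the preconditioner)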
On Tue, Dec 12, 2017 at 9:43 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Tue, Dec 12, 2017 at 11:30 AM, Alexander Lindsay <
> alexlindsay...@gmail.com>
On Tue, Dec 12, 2017 at 10:39 AM, Matthew Knepley <knep...@gmail.com> wrote:
> On Tue, Dec 12, 2017 at 12:26 PM, Alexander Lindsay <
> alexlindsay...@gmail.com> wrote:
>
>> Ok, I'm going to go back on my original statement...the physics being run
>> here
I'm working with a relatively new set of physics (new to me) and the
Jacobians are bad. While debugging the Jacobians, I've been running with
different finite difference approximations. I've found in general that
matrix-free approximation of the Jacobian action leads to much better
convergence
This question comes from modeling mechanical contact with MOOSE; from
talking with Derek Gaston, this has been a topic of conversation before...
With contact, our residual function is not continuous. Depending on the
values of our displacements, we may or may not have mechanical contact
resulting
Looks like `-snes_test_jacobian` and `-snes_test_jacobian_view` are the
options to use...
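For the archives, the minimal invocation I ended up with (the executable and input file names are just placeholders):
./moose-app-opt -i input.i -snes_test_jacobian -snes_test_jacobian_view
The first option differences the residual and reports the norm of the difference from our assembled Jacobian; the second additionally views the hand-coded and finite-difference Jacobians so you can see where they disagree.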
On Mon, Jun 4, 2018 at 2:27 PM, Kong, Fande wrote:
> Hi PETSc Team,
>
> I was wondering if "snes_type test" is gone? Quite a few MOOSE users
> use this option to test their Jacobian matrices.
>
> If
Is there a way to mimic the behavior of `-snes_type test` with new PETSc?
E.g. don't attempt to perform any solves? They're stupid, but we have a
bunch of tests in MOOSE that are set up only to test the Jacobian, and any
attempt to actually solve the system is disastrous. I can hack around
this
Thank you Lisandro, that seems to work perfectly for what we want!
On Sat, Jun 30, 2018 at 4:28 AM, Lisandro Dalcin wrote:
> ./example.exe -snes_test_jacobian -snes_type ksponly -ksp_type preonly
> -pc_type none -snes_convergence_test skip
> On Sat, 30 Jun 2018 at 01:06, Alexande
Is there any elegant way to tell whether SNESComputeFunction is being
called under different conceptual contexts?
E.g. non-linear residual evaluation vs. Jacobian formation from finite
differencing vs. Jacobian-vector products from finite differencing?
Alex
On Jan 26, 2018, at 4:32 PM, Kong, Fande <fande.k...@inl.gov> wrote:
>> > On Fri, Jan 26, 2018 at 3:10 PM, Smith, Barry F. <bsm...@mcs.anl.gov>
>> > wrote:
>> >
>> >
>> > > On Jan 26, 2018, at 2:15
Is this regarded as failing just because of the unused option warning? The
`mpicxx` wrapper specifies `-Wl,-flat_namespace` which is indeed going to
be unused during preprocessing...
On Wed, Dec 4, 2019 at 11:40 AM Alexander Lindsay
wrote:
> I'm currently unable to build superlu_dist dur
f it believes you're
>> compiling PETSc code using an incompatible MPI.
>>
>> Note that some of this is hidden in the environment on Cray systems, for
>> example, where CC=cc regardless of what compiler you're actually using.
>>
>> Alexander Lindsay writes:
>>
Alright, I think the version checking info is all that I need. Thanks!
What's the cleanest way to determine the MPI install used to build PETSc?
We are configuring an MPI-based C++ library with autotools that will
eventually be used by libMesh, and we'd like to make sure that this library
(as well as libMesh) uses the same MPI that PETSc used or at worst detect
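(For posterity, the low-tech approach we fell back on, sketched from memory, so treat the exact file names as assumptions on my part:
grep -i mpi $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/petscvariables
cat $PETSC_DIR/$PETSC_ARCH/lib/petsc/conf/reconfigure-$PETSC_ARCH.py
The first shows the MPI-related variables PETSc recorded at configure time; the second is the generated reconfigure script, which preserves the original configure options including any --with-mpi-dir.)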
It looks like Fande has attached the eigenvalue plots with the real axis
having a logarithmic scale. The same plots with a linear scale are attached
here.
The system has 306 degrees of freedom. 12 eigenvalues are unity for both
scaled and unscaled cases; this number corresponds to the number of
Ok, this is good to know. Yea we'll probably just roll back then. Thanks!
On Tue, May 12, 2020 at 12:45 PM Satish Balay wrote:
> On Tue, 12 May 2020, Matthew Knepley wrote:
>
> > On Tue, May 12, 2020 at 3:13 PM Alexander Lindsay <
> alexlindsay...@gmail.com>
> > wr
Does anyone have a suggestion for this compilation error from petscconf.h?
Sorry this is with a somewhat old PETSc version:
configure:34535: checking whether we can compile a trivial PETSc program
configure:34564: mpicxx -c -std=gnu++11
-I/opt/moose/petsc-3.11.4/mpich-3.3_gcc-9.2.0-opt/include
printing is done with SNESReasonView() and KSPReasonView()
>> I would suggest copying those files to Moose with a name change and
>> removing all the code you don't want. Then you can call your versions
>> immediately after SNESSolve() and KSPSolve().
>>
>>Barry
>>
>
To help debug the many emails we get about solves that fail to converge, in
MOOSE we recently appended `-snes_converged_reason -ksp_converged_reason`
for every call to `SNESSolve`. Of course, now we have users complaining
about the new text printed to their screens that they didn't have before.
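A sketch of one way we could gate the output ourselves; `verbose` and the surrounding wrapper are our own hypothetical pieces, not anything PETSc provides:

  SNESConvergedReason reason;
  ierr = SNESSolve(snes, NULL, x);CHKERRQ(ierr);
  ierr = SNESGetConvergedReason(snes, &reason);CHKERRQ(ierr);
  if (verbose || reason <= 0) {
    /* only bother users with failures unless they asked for the extra output */
    ierr = PetscPrintf(PETSC_COMM_WORLD, "Nonlinear solve: %s\n", SNESConvergedReasons[reason]);CHKERRQ(ierr);
  }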
w manual page
>
>
> On Jul 28, 2020, at 7:50 PM, Matthew Knepley wrote:
>
> On Tue, Jul 28, 2020 at 8:09 PM Alexander Lindsay <
> alexlindsay...@gmail.com> wrote:
>
>> The only slight annoyance with doing this through a PostSolve hook as
>> opposed to a plug
My interpretation of the documentation page of MatZeroRows is that if I've
set MAT_KEEP_NONZERO_PATTERN to true, then my sparsity pattern shouldn't be
changed by a call to it, e.g. a->imax should not change. However, at least
for sequential matrices, MatAssemblyEnd is called with
(11, 0.) (12, 5.) (13, 0.) (17, 0.)
>> row 13: (8, -1.) (12, -1.) (13, 4.) (14, -1.) (18, -1.)
>> row 14: (9, 0.) (13, 0.) (14, 5.) (19, 0.)
>> row 15: (10, -1.) (15, 4.) (16, -1.) (20, -1.)
>> row 16: (11, 0.) (15, 0.) (16, 5.) (17, 0.) (21, 0.)
>> row 1
Especially if the user has requested to keep their nonzero pattern, is
there any harm in calling MatAssemblyBegin/End with FLUSH instead of FINAL? Are
there users relying on MatZeroRows being their final assembly?
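For context, the pattern on our side is roughly the following sketch (variable names are invented for illustration):

  ierr = MatSetOption(mat, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE);CHKERRQ(ierr);
  /* zero the constrained rows, placing diag_value on the diagonal; with the option above
     the zeroed entries should stay in the nonzero structure */
  ierr = MatZeroRows(mat, num_rows, rows, diag_value, NULL, NULL);CHKERRQ(ierr);
  /* we would like to keep inserting values after this point, hence the FLUSH vs FINAL question */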
On Thu, Jul 15, 2021 at 8:51 AM Alexander Lindsay
wrote:
> On Thu, Jul 15, 2021 at 8
to be resolved...
On Thu, Jul 15, 2021 at 9:33 AM Alexander Lindsay
wrote:
> Especially if the user has requested to keep their nonzero pattern, is
> there any harm in calling MatAssemblyBegin/End with FLUSH instead of FINAL? Are
> there users relying on MatZeroRows being their final assembly?
breaks cmake or mpif90?]
>
> Satish
>
> On Mon, 28 Mar 2022, Alexander Lindsay wrote:
>
> > Ok, nothing to see here ... This was user error. I had MPI_ROOT set to a
> > different MPI install than that corresponding to the mpi in my PATH.
> >
> > On Mon, Mar 2
> > We've seen similar problems that have at least partially been dealt
> with in the main branch.
> >
> >Barry
> >
> >
> >
> > > On Mar 28, 2022, at 4:42 PM, Alexander Lindsay <
> alexlindsay...@gmail.com> wrote:
> > >
>
I know that PETSc has native support for ASPIN. Has anyone tried MSPIN? I
wouldn't be surprised if someone has implemented it in user code. Wondering
what the barriers would be to creating an option like `-snes_type mspin` ?
In the block matrices documentation, it's stated: "Note that for interlaced
storage the number of rows/columns of each block must be the same size" Is
interlacing defined in a global sense, or a process-local sense? So
explicitly, if I don't want the same size restriction, do I need to ensure
that
erlace in that way. You can still
> distribute pressure and velocity over all processes, but will need index
> sets to identify the velocity-pressure splits.
>
> Alexander Lindsay writes:
>
> > In the block matrices documentation, it's stated: "Note that for
> interlaced
Under what conditions can I use LSC preconditioning for field split
problems with Schur? Let's imagine that all I've done is called
SNESSetJacobian with global A and P and provided the index sets for 0 and 1.
Based off of the documentation on the man page
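For concreteness, the kind of options table I have in mind, based only on my reading of the man pages (the split numbering and the inner preconditioner choice are assumptions about my own setup):
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_fact_type upper
-pc_fieldsplit_schur_precondition self
-fieldsplit_1_ksp_type gmres
-fieldsplit_1_pc_type lsc
-fieldsplit_1_lsc_pc_type hypre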
That makes sense. Thanks for the quick reply!
On Fri, Nov 11, 2022 at 2:59 PM Matthew Knepley wrote:
> On Fri, Nov 11, 2022 at 5:57 PM Alexander Lindsay <
> alexlindsay...@gmail.com> wrote:
>
>> Under what conditions can I use LSC preconditioning for field split
>>
PM Alexander Lindsay
> wrote:
>
>> My understanding looking at PCFieldSplitSetDefaults is that our
>> implementation of `createfielddecomposition` should get called, we'll set
>> `fields` and then (ignoring possible user setting of
>> -pc_fieldsplit_%D_fields flag)
:
> On Mon, Nov 7, 2022 at 2:09 PM Alexander Lindsay
> wrote:
>
>> The libMesh/MOOSE specific code that identifies dof indices for
>> ISCreateGeneral is in DMooseGetEmbedding_Private. I can share that function
>> (it's quite long) or more details if that could be helpful.
. */
  ierr = DMCreateFieldDecomposition_Moose(dm, len, namelist, innerislist, dmlist);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
On Thu, Nov 3, 2022 at 5:19 PM Matthew Knepley wrote:
> On Thu, Nov 3, 2022 at 7:52 PM Alexander Lindsay
> wrote:
>
>> I hav
The libMesh/MOOSE specific code that identifies dof indices for
ISCreateGeneral is in DMooseGetEmbedding_Private. I can share that function
(it's quite long) or more details if that could be helpful.
On Mon, Nov 7, 2022 at 10:55 AM Alexander Lindsay
wrote:
> I'm not sure exactly what you m
We sometimes overallocate our sparsity pattern. Matrix assembly will
squeeze out allocations that we never added into/set. Is there a convenient
way to determine the size of the densest row post-assembly? I know that we
could iterate over rows and call `MatGetRow` and figure it out that way.
But
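The brute-force version I had in mind looks something like the following sketch (A and the communicator are placeholders, and you would still need the reduction across ranks):

  PetscInt rstart, rend, ncols, max_cols = 0;
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (PetscInt row = rstart; row < rend; ++row) {
    /* only the column count is needed, so pass NULL for the indices and values */
    ierr = MatGetRow(A, row, &ncols, NULL, NULL);CHKERRQ(ierr);
    max_cols = PetscMax(max_cols, ncols);
    ierr = MatRestoreRow(A, row, &ncols, NULL, NULL);CHKERRQ(ierr);
  }
  ierr = MPIU_Allreduce(MPI_IN_PLACE, &max_cols, 1, MPIU_INT, MPI_MAX, PETSC_COMM_WORLD);CHKERRQ(ierr);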
This is great. Thanks Matt!
On Sun, Nov 6, 2022 at 2:35 PM Matthew Knepley wrote:
> On Sun, Nov 6, 2022 at 5:31 PM Alexander Lindsay
> wrote:
>
>> We sometimes overallocate our sparsity pattern. Matrix assembly will
>> squeeze out allocations that we never added into/set.
I have errors on quite a few (but not all) processes of the like
[1]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[1]PETSC ERROR: Nonconforming object sizes
[1]PETSC ERROR: Local columns of A10 4137 do not equal local rows of A00
decomposition. Thanks Matt for
helping me process through this stuff!
On Tue, Nov 8, 2022 at 4:53 PM Alexander Lindsay
wrote:
> This is from our DMCreateFieldDecomposition_Moose routine. The IS size on
> process 1 (which is the process from which I took the error in the original
I was able to get it worked out, once I knew the issue, doing a detailed
read through our split IS generation. Working great (at least on this test
problem) now!
On Wed, Nov 9, 2022 at 12:45 PM Matthew Knepley wrote:
> On Wed, Nov 9, 2022 at 1:45 PM Alexander Lindsay
> wrote:
>
Hi, is there a place I can look to understand the testing recipes used in
PETSc CI, e.g. what external packages are included (if any), what C++
dialect is used for any external packages built with C++, etc.?
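(My starting guess, which I'd love to have confirmed: the CI builds are just configure scripts checked into the repository, so something like
  ls $PETSC_DIR/config/examples/arch-ci-*.py
  grep -l with-cxx-dialect $PETSC_DIR/config/examples/arch-ci-*.py
would show the per-job configure options, including external packages and any pinned C++ dialect, with the pipeline itself driven by the .gitlab-ci.yml at the top of the repo.)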
Alex
Good to know. I may take a shot at it depending on need and time! Opened
https://gitlab.com/petsc/petsc/-/issues/1362 for doing so
Alex
On Sun, Apr 16, 2023 at 9:27 PM Pierre Jolivet
wrote:
>
> On 17 Apr 2023, at 1:10 AM, Alexander Lindsay
> wrote:
>
> Are there any plans to
I'm likely revealing a lot of ignorance, but in order to use HPDDM as a
preconditioner does my system matrix (I am using the same matrix for A and
P) need to be block type, e.g. baij or sbaij ? In MOOSE our default is aij
and I am currently getting
[1]PETSC ERROR: #1 buildTwo() at /raid/lindad/moose/petsc/arch-moose/include/HPDDM_schwarz.hpp:1012
On Mon, Apr 17, 2023 at 4:55 PM Matthew Knepley wrote:
> I don't think so. Can you show the whole stack?
>
> Thanks,
>
> Matt
>
> On Mon, Apr 17, 2023 at 6:24 PM Alexander Lindsay <
If it helps: if I use those exact same options in serial, then no errors
and the linear solve is beautiful :-)
On Mon, Apr 17, 2023 at 4:22 PM Alexander Lindsay
wrote:
> I'm likely revealing a lot of ignorance, but in order to use HPDDM as a
> preconditioner does my system matrix (I am
sion=
>> 'c++17' # https://github.com/NVIDIA/AMGX/issues/231
>> config/BuildSystem/config/packages/elemental.py:self.maxCxxVersion = 'c++14'
>> config/BuildSystem/config/packages/grid.py:self.maxCxxVersion = 'c++17'
>> config/BuildSystem/confi
Are there any plans to get the missing hook into PETSc for AIR? Just curious if
there’s an issue I can subscribe to or anything.
(Independently I’m excited to test HPDDM out tomorrow)
> On Apr 13, 2023, at 10:29 PM, Pierre Jolivet wrote:
>
>
>> On 14 Apr 2023, at 7:02 AM, Al
> similar to the -pc_fieldsplit_gkb_monitor
>
>
>
> On Apr 13, 2023, at 4:33 PM, Alexander Lindsay
> wrote:
>
> Hi, I'm trying to solve steady Navier-Stokes for different Reynolds
> numbers. My options table
>
> -dm_moose_fieldsplit_names u,p
> -dm_moose_nfiel
Hi, I'm trying to solve steady Navier-Stokes for different Reynolds
numbers. My options table
-dm_moose_fieldsplit_names u,p
-dm_moose_nfieldsplits 2
-fieldsplit_p_dm_moose_vars pressure
-fieldsplit_p_ksp_type preonly
-fieldsplit_p_pc_type jacobi
-fieldsplit_u_dm_moose_vars vel_x,vel_y
OpenMP is definitely linked in and appears in the stacktrace but I haven't asked for any threads (to my knowledge).
On Apr 13, 2023, at 7:03 PM, Mark Adams wrote:
> Are you using OpenMP? ("OMP"). If so try without it.
> On Thu, Apr 13, 2023 at 5:07 PM Alexander Lindsay <alexlindsay
Pierre,
This is very helpful information. Thank you. Yes I would appreciate those
command line options if you’re willing to share!
> On Apr 13, 2023, at 9:54 PM, Pierre Jolivet wrote:
>
>
>
>>> On 13 Apr 2023, at 10:33 PM, Alexander Lindsay
>>> wrote:
This is an interesting article that compares a multi-level ILU algorithm to
approximate commutator and augmented Lagrange methods:
https://doi.org/10.1002/fld.5039
On Wed, Jun 28, 2023 at 11:37 AM Alexander Lindsay
wrote:
> I do believe that based off the results in
> https://doi.org/1
Hi all, I've found that having fgmres as an outer solve and the initial
residual scaling that comes with it can cause difficulties for inner solves
as I'm approaching convergence, presumably because I'm running out of
precision. This is the kind of thing I would normally set an absolute
tolerance
This has been a great discussion to follow. Regarding
> when time stepping, you have enough mass matrix that cheaper
preconditioners are good enough
I'm curious what some algebraic recommendations might be for high Re in
transients. I've found one-level DD to be ineffective when applied
sed for that
> paper.
>
> Alexander Lindsay writes:
>
> > Sorry for the spam. Looks like these authors have published multiple
> papers on the subject
> >
> > Combining the Augmented Lagrangian Preconditioner with the Simple Schur
> Complement A
Do you know of anyone who has applied the augmented Lagrange methodology to a finite volume discretization?
On Jul 6, 2023, at 6:25 PM, Matthew Knepley wrote:
> On Thu, Jul 6, 2023 at 8:30 PM Alexander Lindsay <alexlindsay...@gmail.com> wrote:
>> This is an interesting article that compares a
I know that PETSc has hooks for Euclid but I discovered today that it does
not support 64 bit indices, which many MOOSE applications need. This would
probably be more appropriate for a hypre support forum (does anyone know if
such a forum exists other than opening GitHub issues?), but does anyone
Haha no I am not sure. There are a few other preconditioning options I will explore before knocking on this door some more.
On Jun 22, 2023, at 6:49 PM, Matthew Knepley wrote:
> On Thu, Jun 22, 2023 at 8:37 PM Alexander Lindsay <alexlindsay...@gmail.com> wrote:
>> I know that PETSc has hooks for
upport, but it was so buggy/leaky that we removed the interface.
Alexander Lindsay <alexlindsay...@gmail.com> writes:
> Haha no I am not sure. There are a few other preconditioning options I will explore before knocking on this door some more.
>
> On Jun 22, 2023, at 6:49 PM,
I do believe that based off the results in https://doi.org/10.1137/040608817 we
should be able to make LSC, with proper scaling, compare very favorably
with PCD
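To spell out what I mean by proper scaling (my own paraphrase of scaled LSC/BFBt as I understand it, with F the velocity convection-diffusion block, B the divergence operator, and D a diagonal approximation to F such as its diagonal or a lumped velocity mass matrix):
S^{-1} ~ (B D^{-1} B^T)^{-1} (B D^{-1} F D^{-1} B^T) (B D^{-1} B^T)^{-1}
versus the unscaled BFBt form that simply drops the D^{-1} factors.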
On Tue, Jun 27, 2023 at 10:41 AM Alexander Lindsay
wrote:
> I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 wh
Jolivet
wrote:
>
> On 23 Jun 2023, at 10:06 PM, Pierre Jolivet
> wrote:
>
>
> On 23 Jun 2023, at 9:39 PM, Alexander Lindsay
> wrote:
>
> Ah, I see that if I use Pierre's new 'full' option for
> -mat_schur_complement_ainv_type
>
>
> That was not initially don
, Jun 26, 2023 at 6:03 PM Alexander Lindsay
wrote:
> Returning to Sebastian's question about the correctness of the current LSC
> implementation: in the taxonomy paper that Jed linked to (which talks about
> SIMPLE, PCD, and LSC), equation 21 shows four applications of th
Based on https://github.com/hypre-space/hypre/issues/937 it sounds like
hypre-ILU is under active development and should be the one we focus on
bindings for. It does support 64 bit indices and GPU
On Fri, Jun 23, 2023 at 8:36 AM Alexander Lindsay
wrote:
> Thanks all for your replies. Mark,
des with increasing advection. Why is that?
On Wed, Jun 7, 2023 at 8:01 PM Jed Brown wrote:
> Alexander Lindsay writes:
>
> > This has been a great discussion to follow. Regarding
> >
> >> when time stepping, you have enough mass matrix that cheaper
> preconditi
I've opened https://gitlab.com/petsc/petsc/-/merge_requests/6642 which adds
a couple more scaling applications of the inverse of the diagonal of A
On Mon, Jun 26, 2023 at 6:06 PM Alexander Lindsay
wrote:
> I guess that similar to the discussions about selfp, the approximation of
> the ve
I guess it is because the inverse of the diagonal form of A00 becomes a
poor representation of the inverse of A00? I guess naively I would have
thought that the blockdiag form of A00 is A00
On Fri, Jun 23, 2023 at 10:18 AM Alexander Lindsay
wrote:
> Hi Jed, I will come back with answers to
Ah, I see that if I use Pierre's new 'full' option for
-mat_schur_complement_ainv_type that I get a single iteration for the Schur
complement solve with LU. That's a nice testing option
On Fri, Jun 23, 2023 at 12:02 PM Alexander Lindsay
wrote:
> I guess it is because the inverse of the diago
I've seen threads in the archives about nested field split but I'm not sure
they match what I'm asking about.
I'm doing a Schur field split for a porous version of incompressible
Navier-Stokes. In addition to pressure and velocity fields, we have fluid
and solid temperature fields. I plan to put
2
> should be optional (I can't remember if it is smart enough to allow not
> listing them)
>
> If you have a staggered grid then indicating the fields is trickery (since
> you don't have the simple u,v,t,p layout of the degrees of freedom)
>
>
>
> > On May 17,
Thanks Matt. For the immediate present I will probably use a basic line
search with a precheck, but if I want true line searches in the future I
will pursue option 2
On Thu, Nov 30, 2023 at 2:27 PM Matthew Knepley wrote:
> On Thu, Nov 30, 2023 at 5:08 PM Alexander Lindsay <
> al
its core U
is not some auxiliary vector; it represents true degrees of freedom.
On Thu, Nov 30, 2023 at 12:32 PM Barry Smith wrote:
>
> Why is this all not part of the function evaluation?
>
>
> > On Nov 30, 2023, at 3:25 PM, Alexander Lindsay
> wrote:
> >
> > H
I can do exactly what I want using SNESLineSearchPrecheck and
-snes_linesearch_type basic ... I just can't use any more exotic line
searches
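For anyone who lands on this thread later, the shape of what we're doing now (the callback and context names are just placeholders):

static PetscErrorCode myPreCheck(SNESLineSearch ls, Vec X, Vec Y, PetscBool *changed, void *ctx)
{
  PetscFunctionBegin;
  /* adjust the search direction Y (or stash data from the current solution X)
     before the basic line search applies it; set *changed if Y was modified */
  *changed = PETSC_FALSE;
  PetscFunctionReturn(0);
}

  SNESLineSearch linesearch;
  ierr = SNESGetLineSearch(snes, &linesearch);CHKERRQ(ierr);
  ierr = SNESLineSearchSetPreCheck(linesearch, myPreCheck, user_ctx);CHKERRQ(ierr);

paired with -snes_linesearch_type basic on the command line.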
On Thu, Nov 30, 2023 at 1:22 PM Alexander Lindsay
wrote:
> If someone passes me just L, where L represents the "global" degrees of
> freed
generally an arbitrary L). In order to get U1, I
>> must know both L0 and dL (and U0 of course). This is because at its core U
>> is not some auxiliary vector; it represents true degrees of freedom.
>>
>> On Thu, Nov 30, 2023 at 12:32 PM Barry Smith wrote:
>>
Hi I'm looking at the sources, and I believe the answer is no, but is there
a dedicated callback that is akin to SNESLineSearchPrecheck but is called
before *each* function evaluation in a line search method? I am using a
Hybridized Discontinuous Galerkin method in which most of the degrees of
I recently ran into some parallel crashes and valgrind suggests the issue
is with MUMPS. Has anyone else run into something similar recently?
==4022024== Invalid read of size 4
==4022024==at 0xF961266: dmumps_dr_assemble_local (dsol_distrhs.F:301)
==4022024==by 0xF961266:
This is from a MOOSE test. If I find the same error on something simpler I
will let you know
On Mon, Nov 20, 2023 at 12:56 PM Zhang, Hong wrote:
> Can you provide us a test code that reveals this error?
> Hong
Yea I don't think it's a petsc issue.
Sent from my iPhone
On Nov 20, 2023, at 1:42 PM, Barry Smith wrote:
> Looks like memory allocated down in MUMPS and then accessed incorrectly inside MUMPS. Could easily not be PETSc related.
> On Nov 20, 2023, at 4:22 PM, Alexander Lindsay wrote:
>> This is from
We also see this behavior quite frequently in MOOSE applications that have
physics that generate residuals of largely different scales. Like Matt said
non-dimensionalization would help a lot. Without proper scaling for some of
these types of problems, even when the GMRES iteration converges the
mprovements.
>
> Monotone in this case is that your matrix is positive semidefinite; x^T M x >= 0 for all x.
> For M symmetric, this is the same as M having all nonnegative
> eigenvalues.
>
> Todd.
>
> > On Oct 28, 2019, at 11:14 PM, Alexander Li