ex1 without any hyphen options, I got
>
> Norm of error 2.47258e-15, Iterations 5
>
> It looks like the KSP solver uses 5 iterations to reach convergence, but why,
> when mpi_linear_solver_server is enabled, does it use 1?
>
> I hope to get some help on these issues, thank you!
>
It is less a question of what KSP and PC support running with CUDA and more
a question of what parts of each KSP and PC run with CUDA (and which parts
don't, causing memory traffic back and forth between the CPU and GPU).
Generally speaking, all the PETSc Vec operations run on CUDA.
Is this correct?
>
> Best,
> Yongzhong
>
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Date: Tuesday, April 23, 2024 at 3:35 PM
> To: Yongzhong Li <mailto:yongzhong...@mail.utoronto.ca>
> Cc: petsc-users@mcs.anl.gov <mailto:petsc-users@mcs.anl.gov>
> ma
as netlib
> and intel-mkl will help?
>
> Best,
> Yongzhong
>
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Date: Monday, April 22, 2024 at 4:20 PM
> To: Yongzhong Li <mailto:yongzhong...@mail.utoronto.ca>
> Cc: petsc-users@mcs.anl.gov <
PETSc-provided solvers do not directly use threads.
The BLAS used by LAPACK and PETSc may use threads, depending on which BLAS is
being used and how it was configured.
Some of the vector operations in GMRES in PETSc use BLAS that can use
threads, including axpy, dot, etc. For
the hybrid mode of
> computation. Attached image shows the scaling on a single node.
>
> Thanks,
> Cho
> From: Ng, Cho-Kuen <mailto:c...@slac.stanford.edu>
> Sent: Saturday, August 12, 2023 8:08 AM
> To: Jacob Faibussowitsch <mailto:jacob@gmail.com>
> Cc: Bar
seems to get lost somewhere.
>
> I have to see again if there is already a problem when I make petsc check, or
> if it is just in my program later.
> Not quite sure anymore.
>
>
> I will write back next week, Frank
>
>
>
>
>
>> On 5 Apr 2024, at 19:47
Please send the entire configure.log
> On Apr 5, 2024, at 3:42 PM, Vanella, Marcos (Fed) via petsc-users
> wrote:
>
> This Message Is From an External Sender
> This message came from outside your organization.
> Hi all, we are trying to compile PETSc in Frontier using the structured
>
I see what you are talking about in the blas checks. However those checks
"don't really matter" in that configure still succeeds.
Do you have a problem later with libnvJitLink.so?
Thanks for the configure.log. Send the configure.log for the failed nvJitLink problem.
> On Apr 5, 2024, at 12:58 PM, Frank Bramkamp wrote:
>
> Hi Barry,
>
> Here comes the latest configure.log file
Frank,
Could you send the final, successful configure.log? I want to see if PETSc ever mucks with it later in the configure process.
Barry
> On Apr 5, 2024, at 10:44 AM, Frank Bramkamp wrote:
>
>
> Dear
There was a bug in my attempted fix, so it actually did not skip the option. Try git pull and then run configure again.
> On Apr 5, 2024, at 6:30 AM, Frank Bramkamp wrote:
>
> Dear Barry,
>
> I tried
Frank,
Please try the PETSc git branch barry/2024-04-04/rm-lnvc-link-line/release
This will hopefully resolve the -lnvc issue. Please let us know and we can
add the fix to our current release.
Barry
> On Apr 4, 2024, at 9:37 AM, Frank Bramkamp wrote:
>
Please send configure.log
We do not explicitly include libnvc but as Satish noted it may get listed
when configure is generating link lines.
With configure.log we'll know where it is being included (and we may be able
to provide a fix that removes it explicitly since it is
Note, you can also run with the option -mat_view and it will print each
matrix that gets assembled.
Also in the debugger you can do call MatView(mat,0)
> On Apr 1, 2024, at 2:18 PM, Matthew Knepley wrote:
>
OCKING=1 USE_OPENMP=0'" --download-parmmg --download-pastix --download-pnetcdf --download-pragmatic --download-ptscotch --download-scalapack --download-slepc --download-suitesparse --download-superlu_dist --download-tetgen --download-triangle --with-c2html=0 --with-debugging=1 --with-fortran-bindings=0
Can you check the value of IRHSCOMP in the debugger? Using gdb as the
debugger may work better for this.
Barry
> On Mar 30, 2024, at 3:46 AM, zeyu xia wrote:
>
> Hi! Thanks for your reply.
>
t Cowie
Alex Lindsay
Barry Smith
Blanca Mellado Pinto
David Andrs
David Kamensky
David Wells
Fabien Evard
Fande Kong
Hansol Suh
Hong Zhang
Ilya Fursov
James Wright
Jed Brown
Jeongu Kim
Jeremy L Thompson
Jeremy Theler
Jose Roman
Junchao Zhang
Koki Sagiyama
Lars Bilke
Lisandro Dalcin
Mark Adams
mcs.anl.gov>> on behalf of Zou, Ling via
> petsc-users <mailto:petsc-users@mcs.anl.gov>
> Date: Friday, March 29, 2024 at 2:06 PM
> To: Barry Smith <mailto:bsm...@petsc.dev>, Zhang, Hong
> <mailto:hzh...@mcs.anl.gov>
> Cc: petsc-users@mcs.anl.gov <mailt
>
> Best,
>
> -Ling
>
> From: Zhang, Hong
> Date: Thursday, March 28, 2024 at 4:59 PM
> To: Zou, Ling , Barry Smith
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] Does ILU(15) still make sense or should just use
> LU?
>
> Ling,
&g
the
> connection between physics (the problem we are dealing with) to math (the
> correct combination of those preconditioners).
>
> -Ling
>
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Date: Thursday, March 28, 2024 at 1:09 PM
> To: Zou, Ling mailto:l...@anl.g
ue residual did
> not go down, even with 300 linear iterations.
> PS2: what do you think if it will be beneficial to have more detailed
> discussions (e.g., a presentation?) on the problem we are solving to seek
> more advice?
>
> -Ling
>
> From: Barry Smith mailto:bsm...
This is a bad situation; the solver is not really converging. This can
happen with ILU sometimes: it scales things so badly that the preconditioned
residual decreases a lot, but the true residual is not really getting smaller.
Since your matrices are small, it is best to stick to LU.
You can
ntel/oneAPI/mkl/2023.2.0/lib/intel64
> > > > > > mkl-intel-lp64-dll.lib mkl-sequential-dll.lib mkl-core-dll.lib
> > > > > > --with-mpi-include=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/include
> > > > > > --with-mpi-lib=/cygdrive/g/Intel/
> On Mar 21, 2024, at 6:35 PM, Jed Brown wrote:
>
> Barry Smith writes:
>
>> In my limited understanding of the Fortran iso_c_binding, if we do not
>> provide an equivalent Fortran stub (the user calls) that uses the
>> iso_c_binding to call PETSc C code, th
> On Mar 21, 2024, at 5:19 PM, Jed Brown wrote:
>
> Barry Smith writes:
>
>> We've always had some tension between adding new features to bfort vs developing an entirely new
Martin,
Thanks for the suggestions and offer. The tool we use for automatically generating the Fortran stubs and interfaces is bfort. Its limitations include that it cannot handle string arguments automatically and cannot generate more than
/matrix.c:2408
> [1]PETSC ERROR: #3 MatSetValuesStencil() at
> /home/lei/Software/PETSc/petsc-3.20.4/src/mat/interface/matrix.c:1762
>
> Is it not possible to set values across processors using MatSetValuesStencil?
> If I want to set values of the matrix across processors, what sh
The output is correct (just confusing). For a PETSc DMDA, viewing a
parallel matrix by default converts it to the "natural" ordering instead of the
PETSc parallel ordering.
See the Notes in
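The difference between the two orderings can be sketched in plain Python (a hypothetical 4x2 grid split between 2 ranks as left/right halves; no PETSc involved):

```python
# PETSc numbers each rank's subdomain contiguously; the "natural" ordering
# sweeps the whole grid row by row. Build the PETSc-to-natural map for a
# 4x2 grid split into left/right halves across 2 ranks.
nx, ny, ranks = 4, 2, 2
half = nx // ranks
petsc_to_natural = []
for rank in range(ranks):          # rank 0 owns columns 0..1, rank 1 owns 2..3
    for j in range(ny):
        for i in range(half):
            petsc_to_natural.append(j * nx + rank * half + i)
print(petsc_to_natural)  # [0, 1, 4, 5, 2, 3, 6, 7]
```

So the same matrix printed in the two orderings has its rows/columns permuted by this map, which is why the viewed output looks "shuffled" relative to the parallel layout.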
Please switch to the latest PETSc version; it supports METIS and ParMETIS on
Windows.
Barry
> On Mar 17, 2024, at 11:57 PM, 程奔 <202321009...@mail.scut.edu.cn> wrote:
>
> Hello,
>
> Recently I try
I would just avoid the --download-openblas option. The BLAS/LAPACK provided
by Apple should perform fine, perhaps even better than OpenBLAS on your system.
> On Mar 17, 2024, at 9:58 AM, Zongze Yang wrote:
>
> On Mar 15, 2024, at 9:53 AM, Frank Bramkamp wrote:
>
> Dear PETSc Team,
>
> I am using the latest petsc version 3.20.5.
>
>
> I would like to create a matrix using
> MatCreateSeqAIJ
>
> To
Sorry no one responded to this email sooner.
> On Mar 12, 2024, at 4:18 AM, Marco Seiz wrote:
>
> Hello,
>
>
> I'd like to solve a Stokes-like equation with PETSc, i.e.
>
>
> div( mu *
> On Mar 12, 2024, at 11:54 PM, adigitoleo (Leon) wrote:
>
>> You need to ./configure PETSc for HDF5 using
>>
>>> --with-fortran-bindings=0 --with-mpi-dir=/usr --download-hdf5
>>
You need to ./configure PETSc for HDF5 using
> --with-fortran-bindings=0 --with-mpi-dir=/usr --download-hdf5
It may need additional options; if it does, rerun ./configure with
the additional options it lists.
> On Mar 12, 2024, at 8:19 PM, adigitoleo (Leon) wrote:
>
> On Mar 10, 2024, at 10:16 AM, Yi Hu wrote:
>
> Dear Mark,
>
> Thanks for your reply. I see this mismatch. In fact my global DoF is 324. It
> seems like I always get the local size = global Dof /
Clarify in documentation which routines copy the provided values
https://gitlab.com/petsc/petsc/-/merge_requests/7336
> On Mar 2, 2024, at 6:40 AM, Fabian
Are you forming the Jacobian for the first- and second-order cases inside of
Newton?
You can run both with -log_view to see how much time is spent in the various
events (compute function, compute Jacobian, linear solve, ...) for the two
cases and compare them.
> On Mar 3, 2024, at
Reminder: the next PETSc annual meeting will be held May 23-24 in Cologne,
Germany.
Please register at
We don't consider drop-tolerance preconditioners reliable or robust, so I
don't see anyone implementing one.
> On Feb 27, 2024, at 10:11 AM, Константин Мурусидзе
> wrote:
>
> Thank you! Should we expect it to appear in the near future?
>
>
> 27.02.2024
I'm sorry for the confusion. PETSc does not have a drop-tolerance ILU, so
this function does nothing, as you've noted.
Barry
> On Feb 27, 2024, at 7:27 AM, Константин Мурусидзе
> wrote:
>
>
ith incorrect Jacobians so just having Newton converge does not mean that the Jacobian is correct.
>
> In fact my code has a converged and reasonable result for various cases. I guess Jfd is an approximation, so I could still have a possibly correct Jacobian.
>
> Best wishes,
>
> Yi
>
Use SNESSetUpdate() to provide a callback function that gets called by SNES
automatically immediately before each linear solve. Inside your callback, use
SNESGetFunction(snes,f,NULL,NULL); to access the last computed value of your
function; from this you can update your global variable.
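The pattern can be sketched in plain Python (ToyNewton and its method names are made up for illustration; they are not the SNES API):

```python
# A toy solver that invokes a user "update" callback at each step, after the
# residual has been computed -- mirroring how an SNESSetUpdate-style hook can
# read the last computed function value and update external state.
class ToyNewton:
    def __init__(self):
        self.update_cb = None
        self.last_f = None
    def set_update(self, cb):
        self.update_cb = cb
    def solve(self, f, x, steps=3):
        for it in range(steps):
            self.last_f = f(x)          # last computed function value
            if self.update_cb:
                self.update_cb(self, it)  # user hook runs before the step
            x = x - 0.5 * self.last_f   # fake damped step
        return x

seen = []
solver = ToyNewton()
solver.set_update(lambda s, it: seen.append(s.last_f))  # read last residual
solver.solve(lambda x: x, 4.0)
print(seen)  # [4.0, 2.0, 1.0]
```

The callback reads the solver's stored function value rather than recomputing it, which is exactly the point of fetching it via SNESGetFunction() in the real API.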
The new code is in https://gitlab.com/petsc/petsc/-/merge_requests/7293 and
retains the null space on the submatrices for both MatZeroRows() and
MatZeroRowsAndColumns() regardless of changes to the nonzero structure of the
matrix.
Barry
> On Feb 13, 2024, at 7:12 AM, Jeremy Theler
Thank you for the code.
A) By default MatZeroRows() does change the nonzero structure of the matrix
B) PCFIELDSPLIT loses the null spaces attached to the submatrices if the
nonzero structure of the matrix changes.
For the example code if one sets MatSetOption(A,MAT_KEEP_NONZERO_PATTERN,
7279 does change the code for MatZeroRowsColumns_MPIAIJ(). But perhaps that
does not resolve the problem you are seeing? If that is the case we will need a
reproducible example so we can determine exactly what else is happening in your
code to cause the difficulties.
Here is the diff for
error while loading shared libraries: libmpi_ibm_usempif08.so: cannot open
shared object file: No such file or directory
So using the mpif90 does not work because it links a shared library that
cannot be found at run time.
Perhaps that library is only visible on the batch nodes. You can
The bug fix for 2 is available in
https://gitlab.com/petsc/petsc/-/merge_requests/7279
> On Feb 9, 2024, at 10:50 AM, Barry Smith wrote:
>
>
> 1. Code going through line 692 loses the near nullspace of the matrices
> attached to the sub-KSPs
> 2. The call to
1. Code going through line 692 loses the near nullspace of the matrices
attached to the sub-KSPs
2. The call to MatZeroRowsColumns() changes the nonzero structure for MPIAIJ
but not for SEQAIJ
(unless MAT_KEEP_NONZERO_PATTERN is used)
MatZeroRowsColumns() manual page states:
Unlike
ocs before solving?
>
> Regards,
> Maruthi
>
> On Mon, Feb 5, 2024 at 2:18 AM Barry Smith <mailto:bsm...@petsc.dev>> wrote:
>>
>>Is each rank trying to create its own sequential matrix with
>> MatCreateSeqAIJWithArrays() or did you mean MatCreateMP
teJacobian routine is not guaranteed to be exactly the
vector you passed into SNESSolve() as the solution (it could potentially be
some temporary work vector). I think it is best not to rely on the vector
always being the same.
> Best regards,
>
> Yi
>
> On 2/6/24 01
> FormJacobianShell(), then the code seems to work, meaning that the base
> vector is passed to shell matrix context behind the scene.
>
> Best regards,
>
> Yi
>
> On 2/5/24 19:09, Barry Smith wrote:
>>
>> Send the entire code.
>>
>>
>>> On Feb
ed with
>
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Pointer: Parameter # 1
>
> In MyMult, I actually defined x to
Is each rank trying to create its own sequential matrix with
MatCreateSeqAIJWithArrays() or did you mean MatCreateMPIAIJWithArrays()?
If the latter, then possibly one of your size arguments is wrong or the
indices are incorrect for the given sizes.
Barry
> On Feb 4, 2024, at 3:15
The Fortran "stubs" (subroutines) should be in
$PETSC_ARCH/src/vec/is/section/interface/ftn-auto/sectionf.c and compiled and
linked into the PETSc library.
The same tool that builds the interfaces in
$PETSC_ARCH/src/vec/f90-mod/ftn-auto-interfaces/petscpetscsection.h90, also
builds the
Call MatSetOption(mat,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE)
For each column call VecGetArray(), zero the "small entries", then call
MatSetValues() for that single column.
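A plain-Python analogue of the per-column filtering step (filter_column is a made-up helper for illustration, not a PETSc routine):

```python
# Keep only the entries of a dense column whose magnitude exceeds tol,
# returning (row_index, value) pairs ready for a MatSetValues-style call.
def filter_column(col, tol):
    return [(i, v) for i, v in enumerate(col) if abs(v) > tol]

entries = filter_column([1e-14, 0.5, -2e-13, 3.0], tol=1e-10)
print(entries)  # [(1, 0.5), (3, 3.0)]
```

Combined with MAT_IGNORE_ZERO_ENTRIES, this keeps the tiny values from ever creating nonzero locations in the assembled matrix.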
Barry
> On Feb 2, 2024, at 12:28 PM, TARDIEU Nicolas via petsc-users
> wrote:
>
> Dear PETSc users,
>
> I
hen in the fieldsplit
process LU can be used for A so it does not need a good preconditioner. So what
is the structure of D?
>
> On 1/31/24 18:45, Barry Smith wrote:
>>For large problems, preconditioners have to take advantage of some
>> underlying mathematical stru
For large problems, preconditioners have to take advantage of some
underlying mathematical structure of the operator to perform well (require few
iterations). Just black-boxing the system with simple preconditioners will not
be effective.
So, one needs to look at the Liouvillian
in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> === start mymult ===
> === done mymult ===
> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH it
How do I see a difference? What does "hence ruin my previous converged KSP
result" mean? A different answer at the end of the KSP solve?
$ ./joe > joe.basic
~/Src/petsc/src/ksp/ksp/tutorials (barry/2023-09-15/fix-log-pcmpi=)
arch-fix-log-pcmpi
$ ./joe -ksp_monitor -ksp_converged_reason
This is a problem with MPI programming and optimization; I am unaware of a
perfect solution.
Put the design variables into the solution vector on MPI rank 0, and when
doing your objective/gradient, send the values to all the MPI processes where
you use them. You can use a VecScatter to
Document the change in behavior for matrices with a block size greater than one
https://gitlab.com/petsc/petsc/-/merge_requests/7246
> On Jan 27, 2024, at 3:37 PM, Mark Adams wrote:
>
> Note, pc_redistribute is a great idea but you lose the block size, which is
> obvious after you realize
ferent numbers
> reported by the GPU and Total flop/s columns and why the GPU flop/s are
> always higher than the Total flop/s ?
> Or am I missing something?
>
> Thank you for your attention.
> Anthony Jourdon
>
>
>
> Le sam. 20 janv. 2024 à 02:25, Barry Smit
This could happen if the values in the vector get changed but the
PetscObjectState does not get updated. Normally this is impossible; any action
that changes a vector's values changes its state (for example, calling
VecGetArray()/VecRestoreArray() updates the state).
Are you accessing
there
is no LogEventBegin/End() for VecShift, which is why it doesn't get its own line
in the -log_view).
Barry
> On Jan 19, 2024, at 3:17 PM, Barry Smith wrote:
>
>
> Junchao
>
> I run the following on the CI machine, why does this happen? With trivial
> solver options it
Junchao
I run the following on the CI machine, why does this happen? With trivial
solver options it runs ok.
bsmith@petsc-gpu-02:/scratch/bsmith/petsc/src/ksp/ksp/tutorials$ ./ex34
-da_grid_x 192 -da_grid_y 192 -da_grid_z 192 -dm_mat_type seqaijhipsparse
-dm_vec_type seqhip -ksp_max_it
NaNs indicate we do not have valid computational times for these operations;
think of them as Not Available. Providing valid times for the "inner"
operations listed with NaNs would require inaccurate (higher) times for the outer
operations, since extra synchronization between the CPU and GPU
Generally fieldsplit is used on problems that have a natural "split" of the
variables into two or more subsets. For example, u0,v0,u1,v1,u2,v2,u3,v3. This is
often indicated in the vectors and matrices with the "blocksize" argument, 2 in
this case. DM also often provides this information.
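A plain-Python sketch of what a block size of 2 encodes for interleaved unknowns (field names u/v are just the example above):

```python
# Interleaved unknowns u0,v0,u1,v1,...: a block size of 2 means each field
# can be recovered by striding through the vector.
x = ["u0", "v0", "u1", "v1", "u2", "v2", "u3", "v3"]
u_field = x[0::2]   # every other entry starting at 0
v_field = x[1::2]   # every other entry starting at 1
print(u_field)  # ['u0', 'u1', 'u2', 'u3']
print(v_field)  # ['v0', 'v1', 'v2', 'v3']
```

Fieldsplit uses exactly this kind of index-set split to build the per-field subproblems.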
Thanks. Same version I tried.
> On Jan 18, 2024, at 6:09 PM, Yesypenko, Anna wrote:
>
> Hi Barry,
>
> I'm using version 3.20.3. The tacc system is lonestar6.
>
> Best,
> Anna
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Sent: Thursday, January 18
to try to reproduce?
Barry
> On Jan 18, 2024, at 4:38 PM, Barry Smith wrote:
>
>
>It is using the hash map system for inserting values which only inserts on
> the CPU, not on the GPU. So I don't see that it would be moving any data to
> the GPU until the mat
A.setValue(index, index - 1, -1)
>> A.setValue(index, index, 2)
>> A.setValue(index, index + 1, -1)
>> A.assemble()
>> ```
>> If it means anything to you, when the hash error occurs, it is for index
>> 67283 after filling 201851 nonzero valu
don't think it should be running out of
memory. I cannot reproduce the crash with same parameters on my non-CUDA
machine so debugging will be tricky.
Barry
> On Jan 18, 2024, at 3:35 PM, Barry Smith wrote:
>
>
>Do you ever get a problem with `aij`? Can you r
Do you ever get a problem with `aij`? Can you run in a loop with `aij` to
confirm it doesn't fail then?
Barry
> On Jan 17, 2024, at 4:51 PM, Yesypenko, Anna wrote:
>
> Dear Petsc users/developers,
>
> I'm experiencing a bug when using petsc4py with GPU support. It may be my
Looks like you are using an older version of PETSc. Could you please switch
to the latest and try again, and send the same information if that also fails.
Barry
> On Jan 18, 2024, at 12:59 PM, Peder Jørgensgaard Olesen via petsc-users
> wrote:
>
> Hello,
>
> I need to determine the full
The PETSc petsclog.h (included by petscsys.h) uses C macro magic to log
calls to MPI routines. This is how the symbol is getting into your code. But
normally
if you use PetscInitialize() and link to the PETSc library the symbol would get
resolved.
If that part of the code does not
e gdb , it
> does not show me the location of error
> The code : sshatanawi/SS_GWM (github.com)
> <https://github.com/sshatanawi/SS_GWM>
>
> I really appreciate your helps
>
> Sawsan
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Sent: Wednesday,
om/kevinwgy/m2c/blob/main/LinearSystemSolver.cpp
>
> I am using PETSc 3.12.4.
>
> Thanks!
> Kevin
>
>
> On Thu, Jan 11, 2024 at 12:26 PM Barry Smith <mailto:bsm...@petsc.dev>> wrote:
>>
>>Trying again.
>>
>> Normally, numIts
Trying again.
Normally, numIts would be one less than nEntries since the initial residual
is computed (and stored in the history) before any iterations.
Is this what you are seeing or are you seeing other values for the two?
I've started a run of the PETSc test suite that
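A toy Python solver (not PETSc) showing why the stored history is one entry longer than the iteration count:

```python
# The initial residual is recorded before any iteration, so the history
# always holds numIts + 1 entries.
def toy_solve(b, tol=1e-8, maxit=50):
    x, its = 0.0, 0
    r = b - x
    history = [abs(r)]            # entry 0: initial residual, before iterating
    while abs(r) > tol and its < maxit:
        x += 0.5 * r              # damped fixed-point update
        r = b - x
        its += 1
        history.append(abs(r))    # one entry per iteration
    return x, its, history

x, its, history = toy_solve(1.0)
print(len(history) == its + 1)  # True
```

The same relationship holds for a KSP residual history when the initial residual is included.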
Take a look at the discussion in
https://petsc.gitlab.io/-/petsc/-/jobs/5814862879/artifacts/public/html/manual/streams.html
and I suggest you run the streams benchmark from the branch
barry/2023-09-15/fix-log-pcmpi on your machine to get a baseline for what kind
of speedup you can expect.
d build my jac'
>
> end subroutine formJacobian
>
> it turns out that no matter by a simple assignment or MatCopy(), the compiled
> program gives me the same error as before. So I guess the real jacobian is
> still not set. I wonder how to get around this and let
Shatanawi, Sawsan Muhammad via petsc-users
>> mailto:petsc-users@mcs.anl.gov>> wrote:
>> Hello Matthew,
>>
>> Thank you for your help. I am sorry that I keep coming back with my error
>> messages, but I reached a point that I don't know how to fix them, and I
Sorry, sent the message too quickly. What I said below is incorrect.
Barry
> On Jan 10, 2024, at 8:30 PM, Barry Smith wrote:
>
>
> Kevin,
>
> The manual page is misleading. It actually returns the total length of
> the history as set by KSPSetResidualHist
Kevin,
The manual page is misleading. It actually returns the total length of the
history as set by KSPSetResidualHistory(), not the number of iterations.
Barry
> On Jan 10, 2024, at 7:09 PM, Kevin G. Wang wrote:
>
> Hello everyone!
>
> I am writing a code that uses PETSc/KSP to
cs.anl.gov>> wrote:
> Hello Matthew,
>
> Thank you for your help. I am sorry that I keep coming back with my error
> messages, but I reached a point that I don't know how to fix them, and I
> don't understand them easily.
> The list of errors is getting shorter, now I am getting
erface/snes.c:2864
> [0]PETSC ERROR: #13 SNESSolve_NEWTONLS() at
> /home/yi/app/petsc-3.16.4/src/snes/impls/ls/ls.c:222
> [0]PETSC ERROR: #14 SNESSolve() at
> /home/yi/app/petsc-3.16.4/src/snes/interface/snes.c:4809
>
> It seems that I have to use a DMSNESSetJacobianLocal() to
sing to test this.
> The output matrix is saved in separate ascii files.
> You can use “make noflux” to compile the code.
>
> Gourav
>
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Date: Saturday, January 6, 2024 at 7:08 PM
> To: Gourav Kumbhojkar <mailto:go
ces may be wrong. I wonder do I need to use the local Vec (of dF), and
> should my output Vec also be in the correct shape (i.e. after calculation I
> need to transform back into a Vec)? As you can see here, my dF is a tensor
> defined on every grid point.
>
> Best wishes,
> Y
est this.
> The output matrix is saved in separate ascii files.
> You can use “make noflux” to compile the code.
>
> Gourav
>
> From: Barry Smith <mailto:bsm...@petsc.dev>
> Date: Saturday, January 6, 2024 at 7:08 PM
> To: Gourav Kumbhojkar <mailto:gour
"formJacobian" should not be __creating__ the matrices. Here "form" means
computing the numerical values in the matrix (or when using a shell matrix it
means keeping a copy of X so that your custom matrix-free multiply knows the
base location where the matrix-free Jacobian-vector products
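A plain-Python sketch of the idea (ShellJacobian is a made-up class, not the PETSc MatShell API): "form" only records the base point X, and the shell multiply uses it for a finite-difference product.

```python
# Matrix-free Jacobian sketch: form() stores the linearization point X;
# mult() computes J(X)*v ~ (F(X + eps*v) - F(X)) / eps without ever
# assembling J.
class ShellJacobian:
    def __init__(self, F, eps=1e-7):
        self.F, self.eps, self.X = F, eps, None
    def form(self, X):              # "form": record the base point only
        self.X = list(X)
    def mult(self, v):              # finite-difference Jacobian-vector product
        Xp = [x + self.eps * vi for x, vi in zip(self.X, v)]
        F0, F1 = self.F(self.X), self.F(Xp)
        return [(a - b) / self.eps for a, b in zip(F1, F0)]

J = ShellJacobian(lambda X: [x * x for x in X])   # F(x) = x^2, so J = diag(2x)
J.form([1.0, 2.0])
print(J.mult([1.0, 1.0]))  # close to [2.0, 4.0]
```

Note that form() allocates nothing: the "matrix" exists only as the stored base point plus the multiply routine, which is exactly why creating matrices inside formJacobian is the wrong place.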
Added clarification to the man pages in
https://gitlab.com/petsc/petsc/-/merge_requests/7170
> On Jan 8, 2024, at 4:31 AM, Deuse, Mathieu via petsc-users
> wrote:
>
> Hello,
>
> I have a piece of code which generates a matrix in CSR format, but
> without sorting the column indexes in increasing order within each row. This
> seems not to be 100% compatible with the MATMPIAIJ
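One way to make such a matrix acceptable is to sort each row's column indices (and the matching values) before handing the CSR arrays to PETSc; a plain-Python sketch over a CSR triple:

```python
# Sort column indices (and matching values) within each CSR row, in place.
def sort_csr_rows(rowptr, colind, vals):
    for r in range(len(rowptr) - 1):
        lo, hi = rowptr[r], rowptr[r + 1]
        order = sorted(range(lo, hi), key=lambda k: colind[k])
        colind[lo:hi] = [colind[k] for k in order]
        vals[lo:hi] = [vals[k] for k in order]
    return rowptr, colind, vals

# 2x3 matrix whose rows were stored with shuffled columns
rowptr = [0, 2, 4]
colind = [2, 0, 1, 0]
vals   = [3.0, 1.0, 5.0, 4.0]
sort_csr_rows(rowptr, colind, vals)
print(colind)  # [0, 2, 0, 1]
print(vals)    # [1.0, 3.0, 4.0, 5.0]
```

Sorting is per row and preserves the rowptr array, so the CSR structure stays valid; only the ordering within each row changes.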
with the 2D implementation of the mirror boundary.
> The row 0 values are -4, 2, and 2 as expected.
>
> Let me know if I should give any other information about this. I also thought
> about using DM_BOUNDARY_GHOSTED and implement the mirror boundary in 3D from
> scratch but I would
Yes, the handling of BoomerAMG options starts at line 365. If we don't
support what you want but hypre has a function call that allows one to set the
values then the option could easily be added to the PETSc options database here
either by you (with a merge request) or us. So I would say
Are you referring to the text?
. `DM_BOUNDARY_MIRROR` - the ghost value is the same as the value 1 grid point
in; that is, the 0th grid point in the real mesh acts like a mirror to define
the ghost point value; not yet implemented for 3d
Looking at the code for
https://gitlab.com/petsc/petsc/-/merge_requests/7135
Regex processing is not ideal for this task; I've modified the code to remove
most false positive finds.
Thanks for reporting the problem,
Barry
> On Dec 21, 2023, at 8:35 AM, Niclas Götting
> wrote:
>
> Hi all,
>
> I noticed that all
Thanks for letting us know, we'll take a look at it.
Barry
> On Dec 21, 2023, at 8:35 AM, Niclas Götting
> wrote:
>
> Hi all,
>
> I noticed that all links to the examples under
> https://petsc.org/release/manualpages/TS/TS/ point to the wrong URL. Instead
> of src/ts/**/*, they
Instead of

    call PCCreate(PETSC_COMM_WORLD, pc, ierr)
    call PCSetType(pc, PCILU, ierr)  ! Choose a preconditioner type (ILU)
    call KSPSetPC(ksp, pc, ierr)     ! Associate the preconditioner with the KSP solver

do

    call KSPGetPC(ksp, pc, ierr)
    call PCSetType(pc, PCILU, ierr)

Do not
I apologize; please ignore my answer below. Use MatCreateShell() as
indicated by Jed.
> On Dec 20, 2023, at 2:14 PM, Barry Smith wrote:
>
>
>
>> On Dec 20, 2023, at 11:44 AM, Yi Hu wrote:
>>
>> Dear Jed,
>>
>> Thanks for your reply. I have a
> On Dec 20, 2023, at 11:44 AM, Yi Hu wrote:
>
> Dear Jed,
>
> Thanks for your reply. I have an analytical one to implement.
>
> Best, Yi
>
> -----Original Message-----
> From: Jed Brown
> Sent: Wednesday, December 20, 2023 5:40 PM
> To: Yi Hu ; petsc-users@mcs.anl.gov
> Subject: Re:
t; external func
> SNESSetFunction(snes, r, func, ctx, ierr)
> SNES snes
> Vec r
> PetscErrorCode ierr
> type (userctx) user
>
>
>
> On Tue, Dec 12, 2023 at 7:10 PM Barry Smith <mailto:bsm...@petsc.dev>> wrote:
>>
>> See
>> https: