> rows=25, cols=25
> total: nonzeros=105, allocated nonzeros=105
> total number of mallocs used during MatSetValues calls=0
> has attached null space
> not using I-node routines
> linear system matrix = precond matrix:
PCFIELDSPLIT is supposed to pick up this null space that you provided and use it
on the Schur complement system. If you run with -ksp_view it should list which
matrices have an attached null space.
>
> Thanks,
> Colton
>
> On Thu, May 23, 2024 at 12:55 PM Barry Smith <bsm...@petsc.dev>
> On Wed, May 29, 2024 at 12:27 AM Barry Smith <bsm...@petsc.dev> wrote:
>>
>>
>>> On May 28, 2024, at 12:08 PM, Runjian Wu <wurunj...@gmail.com> wrote:
>>>
>>
You can use MatConvert()
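For example, a minimal sketch (assuming an assembled BAIJ matrix A; the name B
is hypothetical):

    Mat B;
    PetscCall(MatConvert(A, MATAIJ, MAT_INITIAL_MATRIX, &B)); /* copy A into a new AIJ matrix */
    /* ... use B ... */
    PetscCall(MatDestroy(&B));

MatConvert() can also convert in place by passing MAT_INPLACE_MATRIX and &A.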
> On May 29, 2024, at 10:53 AM, Frank Bramkamp wrote:
>
> Hello Hong,
>
> Thank you for the clarification.
> If I already have a BAIJ matrix format, can I then convert it to the AIJ format?
Great.
> On May 28, 2024, at 9:33 PM, Adrian Croucher
> wrote:
>
> Thanks again Barry, it's working fine for me now too.
>
> - Adrian
>
> On 29/05/24 1:27 pm, Barry Smith wrote:
>>
>> There was a bug in my fix for parallel which I have fixed. You w
n to the matrix. Do you see that behaviour too?
>
> - Adrian
>
> On 29/05/24 4:33 am, Barry Smith wrote:
>>
>> Adrian,
>>
>> I could reproduce with 3 MPI ranks.
>>
>> Another error I had to fix. I also added a test example
>>
> So I don't think this should hold up merging your bugfix.
>
> - Adrian
>
> On 28/05/24 2:13 pm, Barry Smith wrote:
>>
>> When I run the exact code you sent with two ranks and -mat_type mpibaij, it
>> runs as expected. If you modified the code in an
> On May 28, 2024, at 12:08 PM, Runjian Wu wrote:
>
> Hi all,
>
> I have two questions about this information.
> "DMGlobalToLocal() is a short form of DMGlobalToLocalBegin() and
>
>>> global block indices (0,7). It views the matrix before
>>> and after the insertion.
>>>
>>> If I run with "-dm_mat_type aij" it gives the expected results, but with
>>> "-dm_mat_type baij" it doesn't - e.g. if run in serial, it adds the new
Alberto,
You need to construct a matrix and pass it in as the second argument to
solver.setJacobianEquality(myEqualityJacobian, JE)
Barry
> On May 27, 2024, at 11:58 AM, Paganini, Alberto D.M. (Dr.)
> wrote:
>
nonzeros in the right place but also adds a whole lot of other duplicated
> entries in block row 0.
>
> Is there something I'm not understanding about BAIJ, or about
> MatSetValuesBlocked()? or possibly some other mistake?
>
> - Adrian
>
> On 20/05/24 12:24 pm, Barry Smith wrote:
>>
>> Barry Smith <bsm...@petsc.dev> writes:
>>
>> > Unfortunately it cannot automatically because
>> > -pc_fieldsplit_detect_saddle_point just grabs part of
when the matrix is being created. Does this
> information get passed to the submatrices of the fieldsplit?
>
> -Colton
>
> On Thu, May 23, 2024 at 12:36 PM Barry Smith <bsm...@petsc.dev> wrote:
>>
>> Ok,
>>
>> So what is happ
>
> I saw that it was reported as an unused option, and the error message I sent was
> run with -fieldsplit_0_ksp_type preonly.
>
> -Colton
>
> On Thu, May 23, 2024 at 12:13 PM Barry Smith <bsm...@petsc.dev> wrote:
>>
>>
>> Sorry, I gave the wrong option
> computed residual norm 6.86309e-06 at restart, residual norm
> at start of cycle 2.68804e-07
>
> The rest of the error is identical.
>
> On Thu, May 23, 2024 at 10:46 AM Barry Smith <bsm...@petsc.dev> wrote:
>>
>> Use -pc_fieldsplit_0_ksp_type preonly
> [0]PETSC ERROR: #11 KSPSolve() at
> /home/colton/petsc/src/ksp/ksp/interface/itfunc.c:1078
> [0]PETSC ERROR: #12 solveStokes() at cartesianStokesGrid.cpp:1403
>
>
>
> On Thu, May 23, 2024 at 10:33 AM Barry Smith <bsm...@petsc.dev> wrote:
>>
>> Run the failing case with al
> somehow violating a solvability condition of the problem?
>
> Thanks for the help!
>
> -Colton
>
> On Wed, May 22, 2024 at 6:09 PM Barry Smith <bsm...@petsc.dev> wrote:
>>
>> Thanks for the info. I see you are using GMRES inside t
> total number of mallocs used during MatSetValues calls=0
> using I-node routines: found 15150 nodes, limit used is 5
> linear system matrix = precond matrix:
> Mat Object: (back_) 1 MPI process
> type: seqaij
> rows=45150, cols=45150
> total: nonzeros=673650, allocated nonzeros=673650
>
Are you using any other command line options, or did you hardwire any solver
parameters in the code with calls like KSPSetXXX() or PCSetXXX()? Please send
all of them.
Something funky definitely happened when the true residual norms jumped up.
Could you run the same thing with -ksp_view and
nts representing
> interaction between the sources)?
>
> - Adrian
>
> On 20/05/24 12:41 pm, Matthew Knepley wrote:
>> On Sun, May 19, 2024 at 8:25 PM Barry Smith <bsm...@petsc.dev> wrote:
>>>
You can call MatSetOption(mat, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_FALSE) and
then insert the new values. If it is just a handful of new insertions the extra
time should be small.
Making a copy of the matrix won't give you a new matrix that is any faster
to insert into, so it is best to just use the same matrix.
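A minimal sketch of the above (A is an assembled Mat; the indices i, j and the
value v are hypothetical):

    PetscCall(MatSetOption(A, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_FALSE)); /* allow new nonzero locations */
    PetscCall(MatSetValues(A, 1, &i, 1, &j, &v, INSERT_VALUES));           /* entry outside the old pattern */
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));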
> adding a function to automatically remove duplicates at the end of
> "configure" in the next PETSc version?
>
>
> Runjian
>
>
>
> On 5/13/2024 9:31 PM, Barry Smith wrote:
>>
>> Because the order of the libraries can be important, it is difficult for
> In the counterpart function VecGetArrayRead(), if I write values, will the
> compiler report an error?
>
> Runjian
>
> On Mon, May 13, 2024 at 9:36 PM Barry Smith <bsm...@petsc.dev> wrote:
>>
>> I couldn't find a way in Fortran to declare an array as read-only.
Depending on your MPI, mpiexec may not be needed, so
compute-sanitizer --tool memcheck --leak-check full ./a.out args
may work
> On May 13, 2024, at 8:16 PM, Sreeram R Venkat wrote:
>
> I am trying
I couldn't find a way in Fortran to declare an array as read-only. Is there
such support?
Barry
> On May 13, 2024, at 7:28 AM, Runjian Wu wrote:
>
> Hi all,
>
> I have a question about
Because the order of the libraries can be important, it is difficult for
./configure to remove unneeded duplicates automatically.
You can manually remove duplicates by editing
$PETSC_ARCH/lib/petsc/conf/petscvariables after running ./configure
Barry
> On May 13, 2024, at 7:47 AM,
> includes the boundary conditions
>
>
> Please find the attached file contains a draft of my code
>
> Thank you in advance for your time and help.
>
> Best regards,
>
> Sawsan
>
>
> From: Shatanawi, Sawsan Muhammad <sawsan.shatan...@wsu.edu>
> On May 6, 2024, at 8:38 AM, Mark Adams wrote:
>
> I don't know why this should have changed, but you can either not feed -v to
> PETSc (a pain probably), use PETSc's getOptions methods instead of
arch=native --with-cxx=g++
> --download-openmpi --download-superlu --download-opencascade
> --with-openblas-include=${OPENBLAS_INC} --with-openblas-lib=${OPENBLAS_LIB}
> --with-threadsafety --with-log=0 --with-openmp
>
> I didn’t have this issue when I configured PETSc using
--with-x=0
> On Apr 29, 2024, at 12:05 PM, Vanella, Marcos (Fed) via petsc-users
> wrote:
>
> Hi Satish,
> Ok thank you for clarifying. I don't need to include Metis in the config
> phase then
> ex1 without any hyphen options, I got
>
> Norm of error 2.47258e-15, Iterations 5
>
> It looks like the KSP solver uses 5 iterations to reach convergence, but why,
> when mpi_linear_solver_server is enabled, does it use only 1?
>
> I hope to get some help on these issues, thank you!
>
It is less a question of which KSP and PC support running with CUDA and more
a question of which parts of each KSP and PC run with CUDA (and which parts
don't, causing memory traffic back and forth between the CPU and GPU).
Generally speaking, all the PETSc Vec operations run on CUDA.
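For example, a run that keeps both the vectors and the matrix on the GPU (the
executable name is hypothetical; the options are standard PETSc runtime options):

    ./ex1 -vec_type cuda -mat_type aijcusparse -ksp_type cg -pc_type jacobi -log_view

With a GPU-enabled build, -log_view also reports the CPU-to-GPU transfer counts,
which makes the memory traffic mentioned above visible.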
is it correct?
>
> Best,
> Yongzhong
>
> From: Barry Smith <bsm...@petsc.dev>
> Date: Tuesday, April 23, 2024 at 3:35 PM
> To: Yongzhong Li <yongzhong...@mail.utoronto.ca>
> Cc: petsc-users@mcs.anl.gov
as netlib
> and intel-mkl will help?
>
> Best,
> Yongzhong
>
> From: Barry Smith <bsm...@petsc.dev>
> Date: Monday, April 22, 2024 at 4:20 PM
> To: Yongzhong Li <yongzhong...@mail.utoronto.ca>
> Cc: petsc-users@mcs.anl.gov
PETSc provided solvers do not directly use threads.
The BLAS used by LAPACK and PETSc may use threads depending on what BLAS is
being used and how it was configured.
Some of the vector operations in GMRES in PETSc use BLAS that can use
threads, including axpy, dot, etc. For
the hybrid mode of
> computation. Attached image shows the scaling on a single node.
>
> Thanks,
> Cho
> From: Ng, Cho-Kuen <c...@slac.stanford.edu>
> Sent: Saturday, August 12, 2023 8:08 AM
> To: Jacob Faibussowitsch <jacob@gmail.com>
> Cc: Bar
seems to get lost somewhere.
>
> I have to see again if there is already a problem when I make petsc check, or
> if it is just in my program later.
> Not quite sure anymore.
>
>
> I will write back next week, Frank
>
>
>
>
>
>> On 5 Apr 2024, at 19:47
Please send the entire configure.log
> On Apr 5, 2024, at 3:42 PM, Vanella, Marcos (Fed) via petsc-users
> wrote:
>
> Hi all, we are trying to compile PETSc in Frontier using the structured
>
I see what you are talking about in the BLAS checks. However, those checks
"don't really matter" in that configure still succeeds.
Do you have a problem later with libnvJitLink.so?
Thanks for the configure.log. Send the configure.log for the failed nvJitLink problem.
> On Apr 5, 2024, at 12:58 PM, Frank Bramkamp wrote:
>
> Hi Barry,
>
> Here comes the latest configure.log file
Frank,
Could you send the final, successful configure.log. I want to see if PETSc ever mucks with it later in the configure process.
Barry
> On Apr 5, 2024, at 10:44 AM, Frank Bramkamp wrote:
>
>
> Dear
There was a bug in my attempted fix, so it actually did not skip the option. Try git pull and then run configure again.
> On Apr 5, 2024, at 6:30 AM, Frank Bramkamp wrote:
>
> Dear Barry,
>
> I tried
Frank,
Please try the PETSc git branch barry/2024-04-04/rm-lnvc-link-line/release
This will hopefully resolve the -lnvc issue. Please let us know and we can
add the fix to our current release.
Barry
> On Apr 4, 2024, at 9:37 AM, Frank Bramkamp wrote:
>
Please send configure.log
We do not explicitly include libnvc but as Satish noted it may get listed
when configure is generating link lines.
With configure.log we'll know where it is being included (and we may be able
to provide a fix that removes it explicitly since it is
Note, you can also run with the option -mat_view and it will print each
matrix that gets assembled.
Also, in the debugger, you can call MatView(mat,0).
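For instance (the executable name is hypothetical; both options are standard
PETSc options):

    ./myapp -mat_view                      # print each assembled matrix in ASCII
    ./myapp -mat_view draw -draw_pause 1   # show the nonzero pattern in an X window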
> On Apr 1, 2024, at 2:18 PM, Matthew Knepley wrote:
>
OCKING=1 USE_OPENMP=0'" --download-parmmg --download-pastix --download-pnetcdf --download-pragmatic --download-ptscotch --download-scalapack --download-slepc --download-suitesparse --download-superlu_dist --download-tetgen --download-triangle --with-c2html=0 --with-debugging=1 --with-fortran-bindings=0
Can you check the value of IRHSCOMP in the debugger? Using gdb as the
debugger may work better for this.
Barry
> On Mar 30, 2024, at 3:46 AM, zeyu xia wrote:
>
> Hi! Thanks for your reply.
>
t Cowie
Alex Lindsay
Barry Smith
Blanca Mellado Pinto
David Andrs
David Kamensky
David Wells
Fabien Evard
Fande Kong
Hansol Suh
Hong Zhang
Ilya Fursov
James Wright
Jed Brown
Jeongu Kim
Jeremy L Thompson
Jeremy Theler
Jose Roman
Junchao Zhang
Koki Sagiyama
Lars Bilke
Lisandro Dalcin
Mark Adams
mcs.anl.gov> on behalf of Zou, Ling via
> petsc-users <petsc-users@mcs.anl.gov>
> Date: Friday, March 29, 2024 at 2:06 PM
> To: Barry Smith <bsm...@petsc.dev>, Zhang, Hong
> <hzh...@mcs.anl.gov>
> Cc: petsc-users@mcs.anl.gov
>
> Best,
>
> -Ling
>
> From: Zhang, Hong
> Date: Thursday, March 28, 2024 at 4:59 PM
> To: Zou, Ling , Barry Smith
> Cc: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] Does ILU(15) still make sense or should just use
> LU?
>
> Ling,
the
> connection between physics (the problem we are dealing with) to math (the
> correct combination of those preconditioners).
>
> -Ling
>
> From: Barry Smith <bsm...@petsc.dev>
> Date: Thursday, March 28, 2024 at 1:09 PM
> To: Zou, Ling <l...@anl.gov>
> true residual did
> not go down, even with 300 linear iterations.
> PS2: what do you think if it will be beneficial to have more detailed
> discussions (e.g., a presentation?) on the problem we are solving to seek
> more advice?
>
> -Ling
>
> From: Barry Smith <bsm...@petsc.dev>
This is a bad situation; the solver is not really converging. This can
happen with ILU sometimes: it scales things so badly that the preconditioned
residual decreases a lot while the true residual does not really get smaller.
Since your matrices are small, it is best to stick to LU.
You can
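A minimal way to see this from the command line (the executable name is
hypothetical; the options are standard PETSc options):

    ./mysolver -pc_type lu -ksp_monitor_true_residual -ksp_converged_reason

-ksp_monitor_true_residual prints both the preconditioned and the true residual
norms at each iteration, so the mismatch described above is visible directly.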
ntel/oneAPI/mkl/2023.2.0/lib/intel64
> > > > > > mkl-intel-lp64-dll.lib mkl-sequential-dll.lib mkl-core-dll.lib
> > > > > > --with-mpi-include=/cygdrive/g/Intel/oneAPI/mpi/2021.10.0/include
> > > > > > --with-mpi-lib=/cygdrive/g/Intel/
> On Mar 21, 2024, at 6:35 PM, Jed Brown wrote:
>
> Barry Smith writes:
>
>> In my limited understanding of the Fortran iso_c_binding, if we do not
>> provide an equivalent Fortran stub (the user calls) that uses the
>> iso_c_binding to call PETSc C code, th
> On Mar 21, 2024, at 5:19 PM, Jed Brown wrote:
>
> Barry Smith writes:
>
>> We've always had some tension between adding new features to bfort vs developing an entirely new
Martin,
Thanks for the suggestions and offer. The tool we use for automatically generating the Fortran stubs and interfaces is bfort. Its limitations include that it cannot handle string arguments automatically and cannot generate more than
/matrix.c:2408
> [1]PETSC ERROR: #3 MatSetValuesStencil() at
> /home/lei/Software/PETSc/petsc-3.20.4/src/mat/interface/matrix.c:1762
>
> Is it not possible to set values across processors using MatSetValuesStencil?
> If I want to set values of the matrix across processors, what sh
The output is correct (just confusing). For a PETSc DMDA, by default viewing a
parallel matrix converts it to the "natural" ordering instead of the PETSc
parallel ordering.
See the Notes in
Please switch to the latest PETSc version; it supports METIS and ParMETIS on
Windows.
Barry
> On Mar 17, 2024, at 11:57 PM, 程奔 <202321009...@mail.scut.edu.cn> wrote:
>
> Hello,
>
> Recently I try
I would just avoid the --download-openblas option. The BLAS/LAPACK provided
by Apple should perform fine, perhaps even better than OpenBLAS on your system.
> On Mar 17, 2024, at 9:58 AM, Zongze Yang wrote:
>
> On Mar 15, 2024, at 9:53 AM, Frank Bramkamp wrote:
>
> Dear PETSc Team,
>
> I am using the latest petsc version 3.20.5.
>
>
> I would like to create a matrix using
> MatCreateSeqAIJ
>
> To
Sorry no one responded to this email sooner.
> On Mar 12, 2024, at 4:18 AM, Marco Seiz wrote:
>
> Hello,
>
>
> I'd like to solve a Stokes-like equation with PETSc, i.e.
>
>
> div( mu *
> On Mar 12, 2024, at 11:54 PM, adigitoleo (Leon) wrote:
>
>> You need to ./configure PETSc for HDF5 using
>>
>>> --with-fortran-bindings=0 --with-mpi-dir=/usr --download-hdf5
>>
You need to ./configure PETSc for HDF5 using
> --with-fortran-bindings=0 --with-mpi-dir=/usr --download-hdf5
It may need additional options; if it does, rerun ./configure with the
additional options it lists.
> On Mar 12, 2024, at 8:19 PM, adigitoleo (Leon) wrote:
>
> On Mar 10, 2024, at 10:16 AM, Yi Hu wrote:
>
> Dear Mark,
>
> Thanks for your reply. I see this mismatch. In fact my global DoF is 324. It
> seems like I always get the local size = global Dof /
Clarify in documentation which routines copy the provided values
https://gitlab.com/petsc/petsc/-/merge_requests/7336
> On Mar 2, 2024, at 6:40 AM, Fabian
Are you forming the Jacobian for the first and second order cases inside of
Newton?
You can run both with -log_view to see how much time is spent in the various
events (compute function, compute Jacobian, linear solve, ...) for the two
cases and compare them.
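For example (the executable name and the -order flag are hypothetical; the
-log_view :file syntax is standard):

    ./myapp -order 1 -log_view :first.log    # first-order case
    ./myapp -order 2 -log_view :second.log   # second-order case

Then compare the SNESFunctionEval, SNESJacobianEval, and KSPSolve rows in the
two logs.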
> On Mar 3, 2024, at
Reminder: the next PETSc annual meeting will be held May 23-24 in Cologne,
Germany.
Please register at
We don't consider drop-tolerance preconditioners reliable or robust, so I
don't see anyone implementing one.
> On Feb 27, 2024, at 10:11 AM, Константин Мурусидзе
> wrote:
>
> Thank you! Should we expect it to appear in the near future?
>
>
> 27.02.2024
I'm sorry for the confusion. PETSc does not have a drop tolerance ILU so
this function does nothing as you've noted.
Barry
> On Feb 27, 2024, at 7:27 AM, Константин Мурусидзе
> wrote:
>
>
with incorrect Jacobians, so just having Newton converge does not mean that the Jacobian is correct.
>
> In fact my code has a converged and reasonable result for various cases. I guess Jfd is an approximation, so I could still have a possibly correct Jacobian.
>
> Best wishes,
>
> Yi
>
Use SNESSetUpdate() to provide a callback function that gets called by SNES
automatically immediately before each linear solve. Inside your callback use
SNESGetFunction(snes,f,NULL,NULL); to access the last computed value of your
function, from this you can update your global variable.
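A minimal sketch of such a callback (SNESSetUpdate() and SNESGetFunction() are
the calls named above; the global vector gLastF and what you do with it are
hypothetical):

    static Vec gLastF; /* hypothetical global, created elsewhere with the same layout as f */

    static PetscErrorCode MyUpdate(SNES snes, PetscInt step)
    {
      Vec f;

      PetscFunctionBeginUser;
      PetscCall(SNESGetFunction(snes, &f, NULL, NULL)); /* last computed function value */
      PetscCall(VecCopy(f, gLastF));                    /* update the global state */
      PetscFunctionReturn(PETSC_SUCCESS);
    }

    /* registered once, before SNESSolve(): */
    PetscCall(SNESSetUpdate(snes, MyUpdate));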
The new code is in https://gitlab.com/petsc/petsc/-/merge_requests/7293 and
retains the null space on the submatrices for both MatZeroRows() and
MatZeroRowsAndColumns() regardless of changes to the nonzero structure of the
matrix.
Barry
> On Feb 13, 2024, at 7:12 AM, Jeremy Theler
Thank you for the code.
A) By default MatZeroRows() does change the nonzero structure of the matrix
B) PCFIELDSPLIT loses the null spaces attached to the submatrices if the
nonzero structure of the matrix changes.
For the example code, if one sets MatSetOption(A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE) the nonzero structure does not change, so the attached null spaces are retained.
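That is, a minimal sketch (A is the assembled matrix from the example; nrows
and rows are hypothetical):

    PetscCall(MatSetOption(A, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE)); /* zeroed rows keep their pattern */
    PetscCall(MatZeroRows(A, nrows, rows, 1.0, NULL, NULL));          /* no structure change, so the null spaces survive */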
7279 does change the code for MatZeroRowsColumns_MPIAIJ(). But perhaps that
does not resolve the problem you are seeing? If that is the case we will need a
reproducible example so we can determine exactly what else is happening in your
code to cause the difficulties.
Here is the diff for
error while loading shared libraries: libmpi_ibm_usempif08.so: cannot open
shared object file: No such file or directory
So using the mpif90 does not work because it links a shared library that
cannot be found at run time.
Perhaps that library is only visible on the batch nodes. You can
The bug fix for 2 is available in
https://gitlab.com/petsc/petsc/-/merge_requests/7279
> On Feb 9, 2024, at 10:50 AM, Barry Smith wrote:
>
>
> 1. Code going through line 692 loses the near nullspace of the matrices
> attached to the sub-KSPs
> 2. The call to
1. Code going through line 692 loses the near nullspace of the matrices
attached to the sub-KSPs
2. The call to MatZeroRowsColumns() changes the nonzero structure for MPIAIJ
but not for SEQAIJ
(unless MAT_KEEP_NONZERO_PATTERN is used)
MatZeroRowsColumns() manual page states:
Unlike
> procs before solving?
>
> Regards,
> Maruthi
>
> On Mon, Feb 5, 2024 at 2:18 AM Barry Smith <bsm...@petsc.dev> wrote:
>>
>> Is each rank trying to create its own sequential matrix with
>> MatCreateSeqAIJWithArrays() or did you mean MatCreateMPIAIJWithArrays()?
The vector passed to your Jacobian routine is not guaranteed to be exactly the
vector you passed into SNESSolve() as the solution (it could potentially be
some temporary work vector). I think it is best not to rely on the vector
always being the same.
> Best regards,
>
> Yi
>
> On 2/6/24 01
> FormJacobianShell(), then the code seems to work, meaning that the base
> vector is passed to shell matrix context behind the scene.
>
> Best regards,
>
> Yi
>
> On 2/5/24 19:09, Barry Smith wrote:
>>
>> Send the entire code.
>>
>>
>>> On Feb
failed with
>
> [0]PETSC ERROR: - Error Message
> --
> [0]PETSC ERROR: Null argument, when expecting valid pointer
> [0]PETSC ERROR: Null Pointer: Parameter # 1
>
> In MyMult, I actually defined x to
Is each rank trying to create its own sequential matrix with
MatCreateSeqAIJWithArrays() or did you mean MatCreateMPIAIJWithArrays()?
If the latter, then possibly one of your size arguments is wrong or the
indices are incorrect for the given sizes.
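For reference, a minimal sketch of the parallel call (the sizes and the CSR
arrays i, j, a are hypothetical; each rank passes only its own rows):

    Mat A;
    /* m = local rows, n = local columns (PETSC_DECIDE is allowed),
       M, N = global sizes; i, j, a = this rank's CSR triple */
    PetscCall(MatCreateMPIAIJWithArrays(PETSC_COMM_WORLD, m, n, M, N, i, j, a, &A));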
Barry
> On Feb 4, 2024, at 3:15
The Fortran "stubs" (subroutines) should be in
$PETSC_ARCH/src/vec/is/section/interface/ftn-auto/sectionf.c and compiled and
linked into the PETSc library.
The same tool that builds the interfaces in
$PETSC_ARCH/src/vec/f90-mod/ftn-auto-interfaces/petscpetscsection.h90, also
builds the
Call MatSetOption(mat,MAT_IGNORE_ZERO_ENTRIES,PETSC_TRUE)
For each column call VecGetArray(), zero the "small entries", then call
MatSetValues() for that single column.
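A minimal sketch of that loop for one column (the vector v, tolerance tol, row
indices rows, count nlocal, and column index col are hypothetical):

    PetscScalar *vals;

    PetscCall(MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE)); /* zeros are skipped on insert */
    PetscCall(VecGetArray(v, &vals));
    for (PetscInt k = 0; k < nlocal; k++)
      if (PetscAbsScalar(vals[k]) < tol) vals[k] = 0.0; /* zero the "small entries" */
    PetscCall(MatSetValues(A, nlocal, rows, 1, &col, vals, INSERT_VALUES));
    PetscCall(VecRestoreArray(v, &vals));
    /* after all columns: MatAssemblyBegin/End(A, MAT_FINAL_ASSEMBLY) */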
Barry
> On Feb 2, 2024, at 12:28 PM, TARDIEU Nicolas via petsc-users
> wrote:
>
> Dear PETSc users,
>
> I
then in the fieldsplit
process LU can be used for A so it does not need a good preconditioner. So what
is the structure of D?
>
> On 1/31/24 18:45, Barry Smith wrote:
>> For large problems, preconditioners have to take advantage of some
>> underlying mathematical stru
For large problems, preconditioners have to take advantage of some
underlying mathematical structure of the operator to perform well (require few
iterations). Just black-boxing the system with simple preconditioners will not
be effective.
So, one needs to look at the Liouvillian
in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> ( in rhs )
> ( leave rhs )
> === start mymult ===
> === done mymult ===
> Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH it
How do I see a difference? What does "hence ruin my previous converged KSP
result" mean? A different answer at the end of the KSP solve?
$ ./joe > joe.basic
~/Src/petsc/src/ksp/ksp/tutorials (barry/2023-09-15/fix-log-pcmpi=)
arch-fix-log-pcmpi
$ ./joe -ksp_monitor -ksp_converged_reason
This is a problem with MPI programming and optimization; I am unaware of a
perfect solution.
Put the design variables into the solution vector on MPI rank 0, and when
doing your objective/gradient, send the values to all the MPI processes where
you use them. You can use a VecScatter to
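A minimal sketch of the scatter (x is the parallel solution vector;
VecScatterCreateToAll() is the real helper that builds it):

    Vec        xall;
    VecScatter sc;

    PetscCall(VecScatterCreateToAll(x, &sc, &xall));
    PetscCall(VecScatterBegin(sc, x, xall, INSERT_VALUES, SCATTER_FORWARD));
    PetscCall(VecScatterEnd(sc, x, xall, INSERT_VALUES, SCATTER_FORWARD));
    /* every rank now holds a full copy of x in xall for the objective/gradient */
    PetscCall(VecScatterDestroy(&sc));
    PetscCall(VecDestroy(&xall));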
Document the change in behavior for matrices with a block size greater than one
https://gitlab.com/petsc/petsc/-/merge_requests/7246
> On Jan 27, 2024, at 3:37 PM, Mark Adams wrote:
>
> Note, pc_redistribute is a great idea but you lose the block size, which is
> obvious after you realize
> different numbers
> reported by the GPU and Total flop/s columns and why the GPU flop/s are
> always higher than the Total flop/s ?
> Or am I missing something?
>
> Thank you for your attention.
> Anthony Jourdon
>
>
>
> On Sat, Jan 20, 2024 at 02:25, Barry Smith
This could happen if the values in the vector get changed but the
PetscObjectState does not get updated. Normally this is impossible: any action
that changes a vector's values changes its state (for example, calling
VecGetArray()/VecRestoreArray() updates the state).
Are you accessing
there
is no LogEventBegin/End() for VecShift, which is why it doesn't get its own
line in the -log_view output).
Barry
> On Jan 19, 2024, at 3:17 PM, Barry Smith wrote:
>
>
> Junchao
>
> I run the following on the CI machine, why does this happen? With trivial
> solver options it
Junchao
I run the following on the CI machine, why does this happen? With trivial
solver options it runs ok.
bsmith@petsc-gpu-02:/scratch/bsmith/petsc/src/ksp/ksp/tutorials$ ./ex34
-da_grid_x 192 -da_grid_y 192 -da_grid_z 192 -dm_mat_type seqaijhipsparse
-dm_vec_type seqhip -ksp_max_it
NaNs indicate we do not have valid computational times for these operations;
think of them as Not Available. Providing valid times for the "inner"
operations listed with NaNs would require inaccurate (higher) times for the
outer operations, since extra synchronization between the CPU and GPU
Generally fieldsplit is used on problems that have a natural "split" of the
variables into two or more subsets, for example u0,v0,u1,v1,u2,v2,u3,v3. This
is often indicated in the vectors and matrices with the "blocksize" argument, 2
in this case. DM also often provides this information.
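A minimal sketch for the interlaced u,v layout above (A and pc are
hypothetical; the option names are standard):

    PetscCall(MatSetBlockSize(A, 2));       /* u,v interlaced, so block size 2; set before assembly */
    PetscCall(PCSetType(pc, PCFIELDSPLIT)); /* or: -pc_type fieldsplit -pc_fieldsplit_block_size 2 */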
Thanks. Same version I tried.
> On Jan 18, 2024, at 6:09 PM, Yesypenko, Anna wrote:
>
> Hi Barry,
>
> I'm using version 3.20.3. The tacc system is lonestar6.
>
> Best,
> Anna
> From: Barry Smith <bsm...@petsc.dev>
> Sent: Thursday, January 18
to try to reproduce?
Barry
> On Jan 18, 2024, at 4:38 PM, Barry Smith wrote:
>
>
> It is using the hash map system for inserting values which only inserts on
> the CPU, not on the GPU. So I don't see that it would be moving any data to
> the GPU until the mat