Re: [petsc-users] On PCFIELDSPLIT and its implementation

2022-11-08 Thread Edoardo alinovi
Hello guys,

I am getting this error while using fieldsplit:

[3]PETSC ERROR: --------------------- Error Message --------------------------------------------------
[3]PETSC ERROR: Nonconforming object sizes
[3]PETSC ERROR: Local column sizes 6132 do not add up to total number of columns 9200
[3]PETSC ERROR: See https://petsc.org/release/faq/ for trouble shooting.
[3]PETSC ERROR: Petsc Development GIT revision: v3.18.1-191-g32ed6ae2ff2
 GIT Date: 2022-11-08 12:22:17 -0500
[3]PETSC ERROR: flubio_coupled on a gnu named alienware by edo Wed Nov  9
08:16:29 2022
[3]PETSC ERROR: Configure options PETSC_ARCH=gnu FOPTFLAGS=-O3
COPTFLAGS=-O3 CXXOPTFLAGS=-O3 -with-debugging=no -download-fblaslapack=1
-download-superlu_dist -download-mumps -download-hypre -download-metis
-download-parmetis -download-scalapack -download-ml -download-slepc
-download-hpddm -download-cmake
-with-mpi-dir=/home/edo/software/openmpi-4.1.1/build/
[3]PETSC ERROR: #1 MatCreateSubMatrix_MPIBAIJ_Private() at
/home/edo/software/petsc/src/mat/impls/baij/mpi/mpibaij.c:1987
[3]PETSC ERROR: #2 MatCreateSubMatrix_MPIBAIJ() at
/home/edo/software/petsc/src/mat/impls/baij/mpi/mpibaij.c:1911
[3]PETSC ERROR: #3 MatCreateSubMatrix() at
/home/edo/software/petsc/src/mat/interface/matrix.c:8340
[3]PETSC ERROR: #4 PCSetUp_FieldSplit() at
/home/edo/software/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c:657
[3]PETSC ERROR: #5 PCSetUp() at
/home/edo/software/petsc/src/ksp/pc/interface/precon.c:994
[3]PETSC ERROR: #6 KSPSetUp() at
/home/edo/software/petsc/src/ksp/ksp/interface/itfunc.c:406
[3]PETSC ERROR: #7 KSPSolve_Private() at
/home/edo/software/petsc/src/ksp/ksp/interface/itfunc.c:825
[3]PETSC ERROR: #8 KSPSolve() at
/home/edo/software/petsc/src/ksp/ksp/interface/itfunc.c:1071

Do you have any ideas? Probably something missing in my brief
implementation here:

  call PCSetType(mypc, PCFIELDSPLIT, ierr)
  call PCFieldSplitSetBlockSize(mypc, 4-bdim, ierr)

  ! 2D, 3x3 block
  if (bdim == 1) then
     ufields(1) = 0
     ufields(2) = 1
     pfields(1) = 2
     call PCFieldSplitSetFields(mypc, "u", 2, ufields, ufields, ierr)
     call PCFieldSplitSetFields(mypc, "p", 1, pfields, pfields, ierr)
  ! 3D, 4x4 block
  else
     ufields(1) = 0
     ufields(2) = 1
     ufields(3) = 2
     pfields(1) = 3
     call PCFieldSplitSetFields(mypc, "u", 3, ufields, ufields, ierr)
     call PCFieldSplitSetFields(mypc, "p", 1, pfields, pfields, ierr)
  endif

  ! Field split type: ADDITIVE, MULTIPLICATIVE (default), SYMMETRIC_MULTIPLICATIVE, SPECIAL, SCHUR
  call PCFieldSplitSetType(mypc, PC_COMPOSITE_SCHUR, ierr)
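
For reference, the command-line options I am trying to mirror in the code above (my best guess at the exact names, so please correct me if any is off) would be along the lines of:

  -pc_type fieldsplit
  -pc_fieldsplit_block_size 4       (3 in 2D)
  -pc_fieldsplit_0_fields 0,1,2     (0,1 in 2D)
  -pc_fieldsplit_1_fields 3         (2 in 2D)
  -pc_fieldsplit_type schur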

Thanks for the help!


[petsc-users] Reference element in DMPlexComputeCellGeometryAffineFEM

2022-11-08 Thread Blaise Bourdin
Hi,

What reference simplex is DMPlexComputeCellGeometryAffineFEM using in 2 and 3D?
I am used to computing my shape functions on the unit simplex (vertices at the 
origin and each e_i), but it does not look to be the reference simplex in this 
function:

In 3D, for the unit simplex with vertices at (0,0,0), (1,0,0), (0,1,0), (0,0,1) 
(in this order), I get J = (1/2) * [[-1,-1,-1],[1,0,0],[0,0,1]] and v0 = [0,0,1].

In 2D, for the unit simplex with vertices at (0,0), (1,0), and (0,1), I get 
J = (1/2) * I and v0 = [0,0], which does not make any sense to me (I was assuming 
that the 2D reference simplex had vertices at (-1,-1), (1,-1) and (-1,1), but 
if this were the case, v0 would not be 0).

I can build a simple example with meshes consisting only of the unit simplex in 
2D and 3D if that would help.
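
For concreteness, this is the kind of minimal 2D example I have in mind (an untested sketch; I am assuming DMPlexCreateFromCellListPetsc for building the single-triangle mesh):

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM              dm;
    const PetscInt  cells[3]  = {0, 1, 2};
    const PetscReal coords[6] = {0., 0., 1., 0., 0., 1.}; /* vertices (0,0), (1,0), (0,1) */
    PetscReal       v0[2], J[4], invJ[4], detJ;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* a single interpolated 2D triangle */
    PetscCall(DMPlexCreateFromCellListPetsc(PETSC_COMM_SELF, 2, 1, 3, 3, PETSC_TRUE, cells, 2, coords, &dm));
    PetscCall(DMPlexComputeCellGeometryAffineFEM(dm, 0, v0, J, invJ, &detJ));
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "v0 = (%g, %g)  J = [%g %g; %g %g]  detJ = %g\n",
                          (double)v0[0], (double)v0[1], (double)J[0], (double)J[1],
                          (double)J[2], (double)J[3], (double)detJ));
    PetscCall(DMDestroy(&dm));
    PetscCall(PetscFinalize());
    return 0;
  }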

Regards,
Blaise



— 
Canada Research Chair in Mathematical and Computational Aspects of Solid 
Mechanics (Tier 1)
Professor, Department of Mathematics & Statistics
Hamilton Hall room 409A, McMaster University
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243



Re: [petsc-users] Local columns of A10 do not equal local rows of A00

2022-11-08 Thread Alexander Lindsay
This is from our DMCreateFieldDecomposition_Moose routine. The IS size on
process 1 (which is the process from which I took the error in the original
post) is reported as 4129, which is consistent with the row size of A00.

Split '0' has local size 4129 on processor 1
Split '0' has local size 4484 on processor 6
Split '0' has local size 4471 on processor 12
Split '0' has local size 4040 on processor 14
Split '0' has local size 3594 on processor 20
Split '0' has local size 4423 on processor 22
Split '0' has local size 2791 on processor 27
Split '0' has local size 3014 on processor 29
Split '0' has local size 3183 on processor 30
Split '0' has local size 3328 on processor 3
Split '0' has local size 4689 on processor 4
Split '0' has local size 8016 on processor 8
Split '0' has local size 6367 on processor 10
Split '0' has local size 5973 on processor 17
Split '0' has local size 4431 on processor 18
Split '0' has local size 7564 on processor 25
Split '0' has local size 12504 on processor 9
Split '0' has local size 10081 on processor 11
Split '0' has local size 13808 on processor 24
Split '0' has local size 14049 on processor 31
Split '0' has local size 15324 on processor 7
Split '0' has local size 15337 on processor 15
Split '0' has local size 14849 on processor 19
Split '0' has local size 15660 on processor 23
Split '0' has local size 14728 on processor 26
Split '0' has local size 15724 on processor 28
Split '0' has local size 17249 on processor 5
Split '0' has local size 15519 on processor 13
Split '0' has local size 16511 on processor 16
Split '0' has local size 16496 on processor 21
Split '0' has local size 18291 on processor 2
Split '0' has local size 18042 on processor 0
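
For reference, the sizes above are produced with something like the following diagnostic sketch (the helper name is just illustrative; is0 stands for the IS our DMCreateFieldDecomposition_Moose returns for split '0'):

  #include <petscis.h>

  /* Sketch: print the local size of the split-'0' IS on every rank so it can
     be compared against the local row size of A00. */
  static PetscErrorCode PrintSplitLocalSize(IS is0)
  {
    PetscInt    nlocal;
    PetscMPIInt rank;

    PetscFunctionBeginUser;
    PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)is0), &rank));
    PetscCall(ISGetLocalSize(is0, &nlocal));
    PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD, "Split '0' has local size %" PetscInt_FMT " on processor %d\n", nlocal, rank));
    PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
    PetscFunctionReturn(0);
  }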

On Mon, Nov 7, 2022 at 6:04 PM Matthew Knepley  wrote:

> On Mon, Nov 7, 2022 at 5:48 PM Alexander Lindsay 
> wrote:
>
>> My understanding looking at PCFieldSplitSetDefaults is that our
>> implementation of `createfielddecomposition` should get called, we'll set
>> `fields` and then (ignoring possible user setting of
>> -pc_fieldsplit_%D_fields flag) PCFieldSplitSetIS will get called with
>> whatever we did to `fields`. So yea I guess that just looking over that I
>> would assume we're not supplying two different index sets for rows and
>> columns, or put more precisely we (MOOSE) are not really afforded the
>> opportunity to. But my interpretation could very well be wrong.
>>
>
> Oh wait. I read the error message again. It does not say that the whole
> selection is rectangular. It says
>
>   Local columns of A10 4137 do not equal local rows of A00 4129
>
> So this is a parallel partitioning thing. Since A00 has 4129 local rows,
> it should have this many columns as well.
> However A10 has 4137 local columns. How big is IS_0, on each process, that
> you pass in to PCFIELDSPLIT?
>
>   Thanks,
>
>  Matt
>
>
>> On Mon, Nov 7, 2022 at 12:33 PM Matthew Knepley 
>> wrote:
>>
>>> On Mon, Nov 7, 2022 at 2:09 PM Alexander Lindsay <
>>> alexlindsay...@gmail.com> wrote:
>>>
 The libMesh/MOOSE specific code that identifies dof indices for
 ISCreateGeneral is in DMooseGetEmbedding_Private. I can share that function
 (it's quite long) or more details if that could be helpful.

>>>
>>> Sorry, I should have written more. The puzzling thing for me is that
>>> somehow it looks like the row and column index sets are not the same. I did
>>> not think
>>> PCFIELDSPLIT could do that. The PCFieldSplitSetIS() interface does not
>>> allow it. I was wondering how you were setting the ISes.
>>>
>>>   Thanks,
>>>
>>>  Matt
>>>
>>>
 On Mon, Nov 7, 2022 at 10:55 AM Alexander Lindsay <
 alexlindsay...@gmail.com> wrote:

> I'm not sure exactly what you mean, but I'll try to give more details.
> We have our own DM class (DM_Moose) and we set our own field and domain
> decomposition routines:
>
>   dm->ops->createfielddecomposition =
> DMCreateFieldDecomposition_Moose;
>
>   dm->ops->createdomaindecomposition =
> DMCreateDomainDecomposition_Moose;
>
>
> The field and domain decomposition routines are as follows (can see
> also at
> https://github.com/idaholab/moose/blob/next/framework/src/utils/PetscDMMoose.C
> ):
>
> static PetscErrorCode
> DMCreateFieldDecomposition_Moose(
> DM dm, PetscInt * len, char *** namelist, IS ** islist, DM **
> dmlist)
> {
>   PetscErrorCode ierr;
>   DM_Moose * dmm = (DM_Moose *)(dm->data);
>
>   PetscFunctionBegin;
>   /* Only called after DMSetUp(). */
>   if (!dmm->_splitlocs)
> PetscFunctionReturn(0);
>   *len = dmm->_splitlocs->size();
>   if (namelist)
>   {
> ierr = PetscMalloc(*len * sizeof(char *), namelist);
> CHKERRQ(ierr);
>   }
>   if (islist)
>   {
> ierr = PetscMalloc(*len * sizeof(IS), islist);
> CHKERRQ(ierr);
>   }
>   if (dmlist)
>   {
> ierr = PetscMalloc(*len * sizeof(DM), dmlist);
> 

Re: [petsc-users] TSBEULER vs TSPSEUDO

2022-11-08 Thread Jed Brown
Francesc Levrero-Florencio  writes:

> Hi Jed,
>
> Thanks for the answer.
>
> We do have a monolithic arc-length implementation based on the TS/SNES logic, 
> but we are also exploring having a custom SNESSHELL because the arc-length 
> logic is substantially more complex than that of traditional load-controlled 
> continuation methods. It works quite well, the only "issue" is its 
> initiation; we are currently performing load-control (or displacement loading 
> as you mentioned) in the first time increment. Besides load-control and 
> arc-length control, what other continuation methods would you suggest 
> exploring?

Those are the main ones, and they're all simple expressions for the constraint 
condition that you have to handle for arc-length methods, thus suitable to make 
extensible. Wriggers' book has a nice discussion and table. I imagine we'll get 
some more experience with the tradeoffs after I add it to SNES.

> The test problem we are dealing with assumes plasticity but with small 
> strains so we will not see any snap-throughs, snap-backs or similar. TSBEULER 
> works quite well for this specific case and converges in a few time steps 
> within around 5-10 SNES iterations per time step. What PETSc functions do you 
> suggest exploring for implementing the TS time step extension control you 
> mentioned?

Check out src/ts/adapt/impls/ for the current implementations.

> Since you mentioned -ts_theta_initial_guess_extrapolate, is it worth using it 
> in highly nonlinear mechanical problems (such as plasticity)? It sounds quite 
> useful if it consistently reduces SNES iterations by one per time step, as 
> each linear solve is quite expensive for large problems.

I found sometimes it overshoots and thus causes problems, so effectiveness was 
problem-dependent. It's just a run-time flag so check it out.

I'm curious if you have experience using BFGS with Jacobian scaling (either a 
V-cycle or a sparse direct solve) instead of Newton. You can try it using 
-snes_type qn -snes_qn_scale_type jacobian. This can greatly reduce the number 
of assemblies and preconditioner setups, and we find it also reduces the total 
number of V-cycles, so it is effective even with our matrix-free p-MG (which is 
very fast and has much lower setup costs, https://arxiv.org/abs/2204.01722).
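
Programmatically, the equivalent is roughly the following sketch (the helper name is just for illustration; I am assuming ts is your TS):

  #include <petscts.h>

  /* Sketch: switch the TS's nonlinear solver to quasi-Newton (BFGS) with
     Jacobian scaling, i.e. -snes_type qn -snes_qn_scale_type jacobian. */
  static PetscErrorCode UseQNWithJacobianScaling(TS ts)
  {
    SNES snes;

    PetscFunctionBeginUser;
    PetscCall(TSGetSNES(ts, &snes));
    PetscCall(SNESSetType(snes, SNESQN));
    PetscCall(SNESQNSetScaleType(snes, SNES_QN_SCALE_JACOBIAN));
    PetscFunctionReturn(0);
  }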


Re: [petsc-users] TSBEULER vs TSPSEUDO

2022-11-08 Thread Francesc Levrero-Florencio
Hi Jed,

Thanks for the answer.

We do have a monolithic arc-length implementation based on the TS/SNES logic, 
but we are also exploring having a custom SNESSHELL because the arc-length 
logic is substantially more complex than that of traditional load-controlled 
continuation methods. It works quite well, the only "issue" is its initiation; 
we are currently performing load-control (or displacement loading as you 
mentioned) in the first time increment. Besides load-control and arc-length 
control, what other continuation methods would you suggest exploring?

The test problem we are dealing with assumes plasticity but with small strains 
so we will not see any snap-throughs, snap-backs or similar. TSBEULER works 
quite well for this specific case and converges in a few time steps within 
around 5-10 SNES iterations per time step. What PETSc functions do you suggest 
exploring for implementing the TS time step extension control you mentioned?

Since you mentioned -ts_theta_initial_guess_extrapolate, is it worth using it 
in highly nonlinear mechanical problems (such as plasticity)? It sounds quite 
useful if it consistently reduces SNES iterations by one per time step, as each 
linear solve is quite expensive for large problems.

Regards,
Francesc.

From: Jed Brown 
Sent: 08 November 2022 17:09
To: Francesc Levrero-Florencio ; 
petsc-users@mcs.anl.gov 
Subject: Re: [petsc-users] TSBEULER vs TSPSEUDO

[External Sender]

First, I believe arc-length continuation is the right approach in this problem 
domain. I have a branch starting an implementation, but need to revisit it in 
light of some feedback (and time has been too short lately).

My group's nonlinear mechanics solver uses TSBEULER because it's convenient to 
parametrize loading on T=[0,1]. Unlike arc-length continuation, this can't 
handle snap-through effects. TSPSEUDO is the usual recommendation if you don't 
care about time accuracy, though you could register a custom controller for 
normal TS methods that implements any logic you'd like around automatically 
extending the time step without using a truncation error estimate.

Note that displacement loading (as usually implemented) is really bad 
(especially for models with plasticity) because increments that are large 
relative to the mesh size can invert elements or initiate plastic yielding when 
that would not happen if using smaller increments. Arc-length continuation also 
helps fix that problem.

Note that you can use extrapolation (-ts_theta_initial_guess_extrapolate), 
though I've found this to be somewhat brittle and only reduce SNES iteration 
count by about 1 per time step.

Francesc Levrero-Florencio  writes:

> Hi PETSc people,
>
> We are running highly nonlinear quasi-static (steady-state) mechanical finite 
> element problems with PETSc, currently using TSBEULER and the basic time 
> adapt scheme.
>
> What we do in order to tackle these nonlinear problems is to parametrize the 
> applied loads with the time in the TS and apply them incrementally. While 
> this usually works well, we have seen instances in which the adaptor would 
> reject the time step according to the calculated truncation errors, even if 
> the SNES converges in a small number of iterations. Another issue that we 
> have recently observed is that in a sequence of converged time steps the 
> adaptor decides to start cutting the time step to smaller and smaller values 
> using the low clip default value of TSAdaptGetClip (again because the 
> truncation errors are high enough). What can we do in order to avoid these 
> issues? The first one is avoided by using TSAdaptSetAlwaysAccept, but the 
> latter remains. We have tried setting the low clip value to its maximum 
> accepted value of 1, but then the time increment does not increase even if 
> the SNES always converges in 3 or 4 iterations. Maybe a solution is to 
> increase the tolerances of the TSAdapt?
>
> Another potential solution we have recently tried in order to tackle these 
> issues is using TSPSEUDO (and deparametrizing the applied loads), but 
> generally find that it takes a much longer time to reach an acceptable 
> solution compared with TSBEULER. We have mostly used the default KSPONLY 
> option, but we'd like to explore TSPSEUDO with NEWTONLS. A first question 
> would be: what happens if the SNES fails to converge, does the solution get 
> updated somehow in the corresponding time step? We have performed a few tests 
> with TSPSEUDO and NEWTONLS, setting the maximum number of SNES iterations to 
> a relatively low number (e.g. 5), and then always setting the SNES as 
> converged in the poststage function, and found that it performs reasonably 
> well, at least better than with the default KSPONLY (does this make any 
> sense?).
>
> Thanks a lot!
>
> Regards,
> Francesc.


Re: [petsc-users] On PCFIELDSPLIT and its implementation

2022-11-08 Thread Matthew Knepley
On Tue, Nov 8, 2022 at 12:05 PM Edoardo alinovi 
wrote:

> Hello Guys,
>
> Thanks to your suggestions on the block matrices, my fully coupled solver
> is proceeding very well!
>
> I am now about to take advantage of the block structure of the matrix
> using PCFIELDSPLIT. I have learned a bit from the user manual and followed
> with interest this discussion in the mailing list:
> https://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024154.html
> which is actually the exact same situation I am in, so I guess most of the
> command line options will be copy and paste.
>
> I would like however to code them in fortran, as I usually provide some
> default implementation alongside the command line options.
>
> While coding some of the options I got an error here
> in PCFieldSplitSetFields() which looks to be undefined. I am importing
> petscksp, do I need to import something else maybe?
>

Since it uses arrays, we will have to write the Fortran wrapper by hand. I
will see if I can do it soon.
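
In the meantime, a possible workaround is to push the equivalent options into the options database and let the PC pick them up. A rough C sketch is below (the same calls also have Fortran bindings; mypc, the field indices, and the helper name are just placeholders):

  #include <petscpc.h>

  /* Workaround sketch until a Fortran binding for PCFieldSplitSetFields() exists:
     set the field groupings as options, then have the PC read them. */
  static PetscErrorCode SetSplitFieldsViaOptions(PC mypc)
  {
    PetscFunctionBeginUser;
    PetscCall(PetscOptionsSetValue(NULL, "-pc_fieldsplit_0_fields", "0,1,2"));
    PetscCall(PetscOptionsSetValue(NULL, "-pc_fieldsplit_1_fields", "3"));
    PetscCall(PCSetFromOptions(mypc));
    PetscFunctionReturn(0);
  }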

  Thanks,

Matt


> Thank you!
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/ 


Re: [petsc-users] TSBEULER vs TSPSEUDO

2022-11-08 Thread Jed Brown
First, I believe arc-length continuation is the right approach in this problem 
domain. I have a branch starting an implementation, but need to revisit it in 
light of some feedback (and time has been too short lately).

My group's nonlinear mechanics solver uses TSBEULER because it's convenient to 
parametrize loading on T=[0,1]. Unlike arc-length continuation, this can't 
handle snap-through effects. TSPSEUDO is the usual recommendation if you don't 
care about time accuracy, though you could register a custom controller for 
normal TS methods that implements any logic you'd like around automatically 
extending the time step without using a truncation error estimate.

Note that displacement loading (as usually implemented) is really bad 
(especially for models with plasticity) because increments that are large 
relative to the mesh size can invert elements or initiate plastic yielding when 
that would not happen if using smaller increments. Arc-length continuation also 
helps fix that problem.

Note that you can use extrapolation (-ts_theta_initial_guess_extrapolate), 
though I've found this to be somewhat brittle and only reduce SNES iteration 
count by about 1 per time step.

Francesc Levrero-Florencio  writes:

> Hi PETSc people,
>
> We are running highly nonlinear quasi-static (steady-state) mechanical finite 
> element problems with PETSc, currently using TSBEULER and the basic time 
> adapt scheme.
>
> What we do in order to tackle these nonlinear problems is to parametrize the 
> applied loads with the time in the TS and apply them incrementally. While 
> this usually works well, we have seen instances in which the adaptor would 
> reject the time step according to the calculated truncation errors, even if 
> the SNES converges in a small number of iterations. Another issue that we 
> have recently observed is that in a sequence of converged time steps the 
> adaptor decides to start cutting the time step to smaller and smaller values 
> using the low clip default value of TSAdaptGetClip (again because the 
> truncation errors are high enough). What can we do in order to avoid these 
> issues? The first one is avoided by using TSAdaptSetAlwaysAccept, but the 
> latter remains. We have tried setting the low clip value to its maximum 
> accepted value of 1, but then the time increment does not increase even if 
> the SNES always converges in 3 or 4 iterations. Maybe a solution is to 
> increase the tolerances of the TSAdapt?
>
> Another potential solution we have recently tried in order to tackle these 
> issues is using TSPSEUDO (and deparametrizing the applied loads), but 
> generally find that it takes a much longer time to reach an acceptable 
> solution compared with TSBEULER. We have mostly used the default KSPONLY 
> option, but we'd like to explore TSPSEUDO with NEWTONLS. A first question 
> would be: what happens if the SNES fails to converge, does the solution get 
> updated somehow in the corresponding time step? We have performed a few tests 
> with TSPSEUDO and NEWTONLS, setting the maximum number of SNES iterations to 
> a relatively low number (e.g. 5), and then always setting the SNES as 
> converged in the poststage function, and found that it performs reasonably 
> well, at least better than with the default KSPONLY (does this make any 
> sense?).
>
> Thanks a lot!
>
> Regards,
> Francesc.


[petsc-users] On PCFIELDSPLIT and its implementation

2022-11-08 Thread Edoardo alinovi
Hello Guys,

Thanks to your suggestions on the block matrices, my fully coupled solver
is proceeding very well!

I am now about to take advantage of the block structure of the matrix
using PCFIELDSPLIT. I have learned a bit from the user manual and followed
with interest this discussion in the mailing list:
https://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024154.html
which is actually the exact same situation I am in, so I guess most of the
command line options will be copy and paste.

I would like however to code them in fortran, as I usually provide some
default implementation alongside the command line options.

While coding some of the options I got an error here
in PCFieldSplitSetFields() which looks to be undefined. I am importing
petscksp, do I need to import something else maybe?

Thank you!


Re: [petsc-users] [petsc-maint] Issues linking petsc header files and lib from FORTRAN codes

2022-11-08 Thread Jianbo Long
Here are the ldd outputs:
>> ldd petsc_3.18_gnu/arch-linux-c-debug/lib/libpetsc.so
linux-vdso.so.1 =>  (0x7f23e5ff2000)
libflexiblas.so.3 =>
/cluster/software/FlexiBLAS/3.0.4-GCC-11.2.0/lib/libflexiblas.so.3
(0x7f23e1b6)
libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7f23e1944000)
libm.so.6 => /usr/lib64/libm.so.6 (0x7f23e1642000)
libdl.so.2 => /usr/lib64/libdl.so.2 (0x7f23e143e000)
libmpi_usempif08.so.40 =>
/cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempif08.so.40
(0x7f23e5fb)
libmpi_usempi_ignore_tkr.so.40 =>
/cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempi_ignore_tkr.so.40
(0x7f23e5fa2000)
libmpi_mpifh.so.40 =>
/cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_mpifh.so.40
(0x7f23e5f2a000)
libmpi.so.40 => /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi.so.40
(0x7f23e5e18000)
libgfortran.so.5 => /cluster/software/GCCcore/11.2.0/lib64/libgfortran.so.5
(0x7f23e1191000)
libgcc_s.so.1 => /cluster/software/GCCcore/11.2.0/lib64/libgcc_s.so.1
(0x7f23e5dfe000)
libquadmath.so.0 => /cluster/software/GCCcore/11.2.0/lib64/libquadmath.so.0
(0x7f23e1149000)
libc.so.6 => /usr/lib64/libc.so.6 (0x7f23e0d7b000)
/lib64/ld-linux-x86-64.so.2 (0x7f23e5dd3000)
libopen-rte.so.40 =>
/cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-rte.so.40
(0x7f23e0cbf000)
libopen-orted-mpir.so =>
/cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-orted-mpir.so
(0x7f23e5df9000)
libopen-pal.so.40 =>
/cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-pal.so.40
(0x7f23e0c0b000)
librt.so.1 => /lib64/librt.so.1 (0x7f23e09f6000)
libutil.so.1 => /lib64/libutil.so.1 (0x7f23e07f3000)
libhwloc.so.15 =>
/cluster/software/hwloc/2.5.0-GCCcore-11.2.0/lib/libhwloc.so.15
(0x7f23e0796000)
libpciaccess.so.0 =>
/cluster/software/libpciaccess/0.16-GCCcore-11.2.0/lib/libpciaccess.so.0
(0x7f23e078b000)
libxml2.so.2 =>
/cluster/software/libxml2/2.9.10-GCCcore-11.2.0/lib/libxml2.so.2
(0x7f23e0617000)
libz.so.1 => /cluster/software/zlib/1.2.11-GCCcore-11.2.0/lib/libz.so.1
(0x7f23e05fe000)
liblzma.so.5 => /cluster/software/XZ/5.2.5-GCCcore-11.2.0/lib/liblzma.so.5
(0x7f23e05d6000)
libevent_core-2.0.so.5 => /lib64/libevent_core-2.0.so.5 (0x7f23e03ab000)
libevent_pthreads-2.0.so.5 => /lib64/libevent_pthreads-2.0.so.5
(0x7f23e01a8000)

And /cluster/software/GCCcore/11.2.0 is pretty recent (around 2020/2021).
You can see that I am using OpenMPI. Now I am trying to compile petsc
without MPI.


On Tue, Nov 8, 2022 at 4:43 PM Satish Balay  wrote:

> On Tue, 8 Nov 2022, Satish Balay via petsc-users wrote:
>
> > You don't see 'libstdc++' in the output from 'ldd libpetsc.so' below - so
> > there is no reference to libstdc++ from petsc
> >
> > Try a clean build of PETSc and see if you still have these issues.
> >
> > ./configure --with-cc=gcc --with-cxx=0 --with-fc=gfortran
> --download-fblaslapack --download-mpich
>
> Perhaps good to also add: --with-hwloc=0
>
> Satish
>
> >
> > Another way to avoid this issue is to use /usr/bin/gcc, gfortran - i.e
> avoid using tools from /cluster/software/GCCcore
> > Are they super old versions - that are not suitable?
> >
> > Satish
> >
> >
> >
> > On Tue, 8 Nov 2022, Jianbo Long wrote:
> >
> > > I am suspecting something else as well ...
> > >
> > > Could you elaborate more about "mixing c++ codes compiled with
> /usr/bin/g++
> > > and compilers in /cluster/software/GCCcore/11.2.0" ? My own Fortran
> code
> > > does not have any c++ codes, and for some reason, the compiled petsc
> > > library is dependent on this libstdc++.so.6. I am sure about this
> because
> > > without linking the petsc, I don't have this libstdc++ trouble.
> > >
> > > Thanks,
> > > Jianbo
> > >
> > > On Mon, Nov 7, 2022 at 7:10 PM Satish Balay  wrote:
> > >
> > > > Likely due to mixing c++ codes compiled with /usr/bin/g++ and
> compilers in
> > > > /cluster/software/GCCcore/11.2.0
> > > >
> > > > if you still get this with --with-cxx=0 - then the issue with some
> other
> > > > [non-petsc library]
> > > >
> > > > Satish
> > > >
> > > > On Mon, 7 Nov 2022, Jianbo Long wrote:
> > > >
> > > > > Hi Satish,
> > > > >
> > > > > I wonder if you know anything about another issue: after compiling
> petsc
> > > > on
> > > > > a cluster, when I tried to link my Fortran code with compiled
> > > > libpetsc.so,
> > > > > the shared library, I got the following errors:
> > > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > > /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required
> by
> > > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found
> (required by
> > > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > > /lib64/libstdc++.so.6: version 

Re: [petsc-users] [petsc-maint] Issues linking petsc header files and lib from FORTRAN codes

2022-11-08 Thread Satish Balay via petsc-users
On Tue, 8 Nov 2022, Satish Balay via petsc-users wrote:

> You don't see 'libstdc++' in the output from 'ldd libpetsc.so' below - so
> there is no reference to libstdc++ from petsc
> 
> Try a clean build of PETSc and see if you still have these issues.
> 
> ./configure --with-cc=gcc --with-cxx=0 --with-fc=gfortran 
> --download-fblaslapack --download-mpich

Perhaps good to also add: --with-hwloc=0

Satish

> 
> Another way to avoid this issue is to use /usr/bin/gcc, gfortran - i.e avoid 
> using tools from /cluster/software/GCCcore
> Are they super old versions - that are not suitable?
> 
> Satish
> 
> 
> 
> On Tue, 8 Nov 2022, Jianbo Long wrote:
> 
> > I am suspecting something else as well ...
> > 
> > Could you elaborate more about "mixing c++ codes compiled with /usr/bin/g++
> > and compilers in /cluster/software/GCCcore/11.2.0" ? My own Fortran code
> > does not have any c++ codes, and for some reason, the compiled petsc
> > library is dependent on this libstdc++.so.6. I am sure about this because
> > without linking the petsc, I don't have this libstdc++ trouble.
> > 
> > Thanks,
> > Jianbo
> > 
> > On Mon, Nov 7, 2022 at 7:10 PM Satish Balay  wrote:
> > 
> > > Likely due to mixing c++ codes compiled with /usr/bin/g++ and compilers in
> > > /cluster/software/GCCcore/11.2.0
> > >
> > > if you still get this with --with-cxx=0 - then the issue with some other
> > > [non-petsc library]
> > >
> > > Satish
> > >
> > > On Mon, 7 Nov 2022, Jianbo Long wrote:
> > >
> > > > Hi Satish,
> > > >
> > > > I wonder if you know anything about another issue: after compiling petsc
> > > on
> > > > a cluster, when I tried to link my Fortran code with compiled
> > > libpetsc.so,
> > > > the shared library, I got the following errors:
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > > /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by
> > > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > >
> > > > Not sure if it is related to discussion in this post (
> > > > https://gitlab.com/petsc/petsc/-/issues/997), but after I tried the
> > > > configure option --with-cxx=0, I still got the same errors.
> > > > My make.log file for compiling petsc is attached here. Also, the
> > > > dependencies of the compiled petsc is:
> > > >
> > > > >>: ldd arch-linux-c-debug/lib/libpetsc.so
> > > > linux-vdso.so.1 =>  (0x7ffd80348000)
> > > > libflexiblas.so.3 =>
> > > > /cluster/software/FlexiBLAS/3.0.4-GCC-11.2.0/lib/libflexiblas.so.3
> > > > (0x7f6e8b93f000)
> > > > libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7f6e8b723000)
> > > > libm.so.6 => /usr/lib64/libm.so.6 (0x7f6e8b421000)
> > > > libdl.so.2 => /usr/lib64/libdl.so.2 (0x7f6e8b21d000)
> > > > libmpi_usempif08.so.40 =>
> > > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempif08.so.40
> > > > (0x7f6e8fd92000)
> > > > libmpi_usempi_ignore_tkr.so.40 =>
> > > >
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempi_ignore_tkr.so.40
> > > > (0x7f6e8fd84000)
> > > > libmpi_mpifh.so.40 =>
> > > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_mpifh.so.40
> > > > (0x7f6e8fd0c000)
> > > > libmpi.so.40 =>
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi.so.40
> > > > (0x7f6e8fbfa000)
> > > > libgfortran.so.5 =>
> > > /cluster/software/GCCcore/11.2.0/lib64/libgfortran.so.5
> > > > (0x7f6e8af7)
> > > > libgcc_s.so.1 => /cluster/software/GCCcore/11.2.0/lib64/libgcc_s.so.1
> > > > (0x7f6e8fbe)
> > > > libquadmath.so.0 =>
> > > /cluster/software/GCCcore/11.2.0/lib64/libquadmath.so.0
> > > > (0x7f6e8af28000)
> > > > libc.so.6 => /usr/lib64/libc.so.6 (0x7f6e8ab5a000)
> > > > /lib64/ld-linux-x86-64.so.2 (0x7f6e8fbb3000)
> > > > libopen-rte.so.40 =>
> > > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-rte.so.40
> > > > (0x7f6e8aa9e000)
> > > > libopen-orted-mpir.so =>
> > > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-orted-mpir.so
> > > > (0x7f6e8fbdb000)
> > > > libopen-pal.so.40 =>
> > > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-pal.so.40
> > 

Re: [petsc-users] [petsc-maint] Issues linking petsc header files and lib from FORTRAN codes

2022-11-08 Thread Matthew Knepley
On Tue, Nov 8, 2022 at 10:28 AM Jianbo Long  wrote:

> I am suspecting something else as well ...
>
> Could you elaborate more about "mixing c++ codes compiled with
> /usr/bin/g++ and compilers in /cluster/software/GCCcore/11.2.0" ? My own
> Fortran code does not have any c++ codes, and for some reason, the compiled
> petsc library is dependent on this libstdc++.so.6. I am sure about this
> because without linking the petsc, I don't have this libstdc++ trouble.
>

Are you sure it is not MPI that is bringing in C++? With --with-cxx=0,
there should be no C++ in PETSc. However, we can test this.
Can you

  ldd ${PETSC_ARCH}/lib/libpetsc.so

  Thanks,

Matt


> Thanks,
> Jianbo
>
> On Mon, Nov 7, 2022 at 7:10 PM Satish Balay  wrote:
>
>> Likely due to mixing c++ codes compiled with /usr/bin/g++ and compilers
>> in /cluster/software/GCCcore/11.2.0
>>
>> if you still get this with --with-cxx=0 - then the issue with some other
>> [non-petsc library]
>>
>> Satish
>>
>> On Mon, 7 Nov 2022, Jianbo Long wrote:
>>
>> > Hi Satish,
>> >
>> > I wonder if you know anything about another issue: after compiling
>> petsc on
>> > a cluster, when I tried to link my Fortran code with compiled
>> libpetsc.so,
>> > the shared library, I got the following errors:
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
>> > /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
>> > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
>> > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
>> > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
>> > /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by
>> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
>> >
>> > Not sure if it is related to discussion in this post (
>> > https://gitlab.com/petsc/petsc/-/issues/997), but after I tried the
>> > configure option --with-cxx=0, I still got the same errors.
>> > My make.log file for compiling petsc is attached here. Also, the
>> > dependencies of the compiled petsc is:
>> >
>> > >>: ldd arch-linux-c-debug/lib/libpetsc.so
>> > linux-vdso.so.1 =>  (0x7ffd80348000)
>> > libflexiblas.so.3 =>
>> > /cluster/software/FlexiBLAS/3.0.4-GCC-11.2.0/lib/libflexiblas.so.3
>> > (0x7f6e8b93f000)
>> > libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7f6e8b723000)
>> > libm.so.6 => /usr/lib64/libm.so.6 (0x7f6e8b421000)
>> > libdl.so.2 => /usr/lib64/libdl.so.2 (0x7f6e8b21d000)
>> > libmpi_usempif08.so.40 =>
>> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempif08.so.40
>> > (0x7f6e8fd92000)
>> > libmpi_usempi_ignore_tkr.so.40 =>
>> >
>> /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempi_ignore_tkr.so.40
>> > (0x7f6e8fd84000)
>> > libmpi_mpifh.so.40 =>
>> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_mpifh.so.40
>> > (0x7f6e8fd0c000)
>> > libmpi.so.40 =>
>> /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi.so.40
>> > (0x7f6e8fbfa000)
>> > libgfortran.so.5 =>
>> /cluster/software/GCCcore/11.2.0/lib64/libgfortran.so.5
>> > (0x7f6e8af7)
>> > libgcc_s.so.1 => /cluster/software/GCCcore/11.2.0/lib64/libgcc_s.so.1
>> > (0x7f6e8fbe)
>> > libquadmath.so.0 =>
>> /cluster/software/GCCcore/11.2.0/lib64/libquadmath.so.0
>> > (0x7f6e8af28000)
>> > libc.so.6 => /usr/lib64/libc.so.6 (0x7f6e8ab5a000)
>> > /lib64/ld-linux-x86-64.so.2 (0x7f6e8fbb3000)
>> > libopen-rte.so.40 =>
>> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-rte.so.40
>> > (0x7f6e8aa9e000)
>> > libopen-orted-mpir.so =>
>> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-orted-mpir.so
>> > (0x7f6e8fbdb000)
>> > libopen-pal.so.40 =>
>> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-pal.so.40
>> > (0x7f6e8a9ea000)
>> > librt.so.1 => /lib64/librt.so.1 (0x7f6e8a7d5000)
>> > libutil.so.1 => /lib64/libutil.so.1 (0x7f6e8a5d2000)
>> > libhwloc.so.15 =>
>> > /cluster/software/hwloc/2.5.0-GCCcore-11.2.0/lib/libhwloc.so.15
>> > (0x7f6e8a575000)
>> > libpciaccess.so.0 =>
>> > /cluster/software/libpciaccess/0.16-GCCcore-11.2.0/lib/libpciaccess.so.0
>> > (0x7f6e8a56a000)
>> > libxml2.so.2 =>
>> > /cluster/software/libxml2/2.9.10-GCCcore-11.2.0/lib/libxml2.so.2
>> > (0x7f6e8a3f6000)
>> > libz.so.1 => /cluster/software/zlib/1.2.11-GCCcore-11.2.0/lib/libz.so.1
>> > (0x7f6e8a3dd000)
>> > liblzma.so.5 =>
>> 

Re: [petsc-users] [petsc-maint] Issues linking petsc header files and lib from FORTRAN codes

2022-11-08 Thread Satish Balay via petsc-users
You don't see 'libstdc++' in the output from 'ldd libpetsc.so' below - so there 
is no reference to libstdc++ from petsc.

Try a clean build of PETSc and see if you still have these issues.

./configure --with-cc=gcc --with-cxx=0 --with-fc=gfortran 
--download-fblaslapack --download-mpich

Another way to avoid this issue is to use /usr/bin/gcc and gfortran, i.e. avoid 
using the tools from /cluster/software/GCCcore.
Are they super old versions that are not suitable?

Satish



On Tue, 8 Nov 2022, Jianbo Long wrote:

> I am suspecting something else as well ...
> 
> Could you elaborate more about "mixing c++ codes compiled with /usr/bin/g++
> and compilers in /cluster/software/GCCcore/11.2.0" ? My own Fortran code
> does not have any c++ codes, and for some reason, the compiled petsc
> library is dependent on this libstdc++.so.6. I am sure about this because
> without linking the petsc, I don't have this libstdc++ trouble.
> 
> Thanks,
> Jianbo
> 
> On Mon, Nov 7, 2022 at 7:10 PM Satish Balay  wrote:
> 
> > Likely due to mixing c++ codes compiled with /usr/bin/g++ and compilers in
> > /cluster/software/GCCcore/11.2.0
> >
> > if you still get this with --with-cxx=0 - then the issue with some other
> > [non-petsc library]
> >
> > Satish
> >
> > On Mon, 7 Nov 2022, Jianbo Long wrote:
> >
> > > Hi Satish,
> > >
> > > I wonder if you know anything about another issue: after compiling petsc
> > on
> > > a cluster, when I tried to link my Fortran code with compiled
> > libpetsc.so,
> > > the shared library, I got the following errors:
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > > /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by
> > > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > >
> > > Not sure if it is related to discussion in this post (
> > > https://gitlab.com/petsc/petsc/-/issues/997), but after I tried the
> > > configure option --with-cxx=0, I still got the same errors.
> > > My make.log file for compiling petsc is attached here. Also, the
> > > dependencies of the compiled petsc is:
> > >
> > > >>: ldd arch-linux-c-debug/lib/libpetsc.so
> > > linux-vdso.so.1 =>  (0x7ffd80348000)
> > > libflexiblas.so.3 =>
> > > /cluster/software/FlexiBLAS/3.0.4-GCC-11.2.0/lib/libflexiblas.so.3
> > > (0x7f6e8b93f000)
> > > libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7f6e8b723000)
> > > libm.so.6 => /usr/lib64/libm.so.6 (0x7f6e8b421000)
> > > libdl.so.2 => /usr/lib64/libdl.so.2 (0x7f6e8b21d000)
> > > libmpi_usempif08.so.40 =>
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempif08.so.40
> > > (0x7f6e8fd92000)
> > > libmpi_usempi_ignore_tkr.so.40 =>
> > >
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempi_ignore_tkr.so.40
> > > (0x7f6e8fd84000)
> > > libmpi_mpifh.so.40 =>
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_mpifh.so.40
> > > (0x7f6e8fd0c000)
> > > libmpi.so.40 =>
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi.so.40
> > > (0x7f6e8fbfa000)
> > > libgfortran.so.5 =>
> > /cluster/software/GCCcore/11.2.0/lib64/libgfortran.so.5
> > > (0x7f6e8af7)
> > > libgcc_s.so.1 => /cluster/software/GCCcore/11.2.0/lib64/libgcc_s.so.1
> > > (0x7f6e8fbe)
> > > libquadmath.so.0 =>
> > /cluster/software/GCCcore/11.2.0/lib64/libquadmath.so.0
> > > (0x7f6e8af28000)
> > > libc.so.6 => /usr/lib64/libc.so.6 (0x7f6e8ab5a000)
> > > /lib64/ld-linux-x86-64.so.2 (0x7f6e8fbb3000)
> > > libopen-rte.so.40 =>
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-rte.so.40
> > > (0x7f6e8aa9e000)
> > > libopen-orted-mpir.so =>
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-orted-mpir.so
> > > (0x7f6e8fbdb000)
> > > libopen-pal.so.40 =>
> > > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-pal.so.40
> > > (0x7f6e8a9ea000)
> > > librt.so.1 => /lib64/librt.so.1 (0x7f6e8a7d5000)
> > > libutil.so.1 => /lib64/libutil.so.1 (0x7f6e8a5d2000)
> > > libhwloc.so.15 =>
> > > /cluster/software/hwloc/2.5.0-GCCcore-11.2.0/lib/libhwloc.so.15
> > > (0x7f6e8a575000)
> > > libpciaccess.so.0 =>
> > > 

Re: [petsc-users] [petsc-maint] Issues linking petsc header files and lib from FORTRAN codes

2022-11-08 Thread Jianbo Long
I am suspecting something else as well ...

Could you elaborate more about "mixing c++ codes compiled with /usr/bin/g++
and compilers in /cluster/software/GCCcore/11.2.0"? My own Fortran code
does not contain any C++ code, yet for some reason the compiled petsc
library depends on this libstdc++.so.6. I am sure about this because
without linking petsc, I don't have this libstdc++ trouble.

Thanks,
Jianbo

On Mon, Nov 7, 2022 at 7:10 PM Satish Balay  wrote:

> Likely due to mixing c++ codes compiled with /usr/bin/g++ and compilers in
> /cluster/software/GCCcore/11.2.0
>
> if you still get this with --with-cxx=0 - then the issue with some other
> [non-petsc library]
>
> Satish
>
> On Mon, 7 Nov 2022, Jianbo Long wrote:
>
> > Hi Satish,
> >
> > I wonder if you know anything about another issue: after compiling petsc
> on
> > a cluster, when I tried to link my Fortran code with compiled
> libpetsc.so,
> > the shared library, I got the following errors:
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > /lib64/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold:
> > /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by
> > /cluster/software/binutils/2.37-GCCcore-11.2.0/bin/ld.gold)
> >
> > Not sure if it is related to discussion in this post (
> > https://gitlab.com/petsc/petsc/-/issues/997), but after I tried the
> > configure option --with-cxx=0, I still got the same errors.
> > My make.log file for compiling petsc is attached here. Also, the
> > dependencies of the compiled petsc is:
> >
> > >>: ldd arch-linux-c-debug/lib/libpetsc.so
> > linux-vdso.so.1 =>  (0x7ffd80348000)
> > libflexiblas.so.3 =>
> > /cluster/software/FlexiBLAS/3.0.4-GCC-11.2.0/lib/libflexiblas.so.3
> > (0x7f6e8b93f000)
> > libpthread.so.0 => /usr/lib64/libpthread.so.0 (0x7f6e8b723000)
> > libm.so.6 => /usr/lib64/libm.so.6 (0x7f6e8b421000)
> > libdl.so.2 => /usr/lib64/libdl.so.2 (0x7f6e8b21d000)
> > libmpi_usempif08.so.40 =>
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempif08.so.40
> > (0x7f6e8fd92000)
> > libmpi_usempi_ignore_tkr.so.40 =>
> >
> /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_usempi_ignore_tkr.so.40
> > (0x7f6e8fd84000)
> > libmpi_mpifh.so.40 =>
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi_mpifh.so.40
> > (0x7f6e8fd0c000)
> > libmpi.so.40 =>
> /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libmpi.so.40
> > (0x7f6e8fbfa000)
> > libgfortran.so.5 =>
> /cluster/software/GCCcore/11.2.0/lib64/libgfortran.so.5
> > (0x7f6e8af7)
> > libgcc_s.so.1 => /cluster/software/GCCcore/11.2.0/lib64/libgcc_s.so.1
> > (0x7f6e8fbe)
> > libquadmath.so.0 =>
> /cluster/software/GCCcore/11.2.0/lib64/libquadmath.so.0
> > (0x7f6e8af28000)
> > libc.so.6 => /usr/lib64/libc.so.6 (0x7f6e8ab5a000)
> > /lib64/ld-linux-x86-64.so.2 (0x7f6e8fbb3000)
> > libopen-rte.so.40 =>
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-rte.so.40
> > (0x7f6e8aa9e000)
> > libopen-orted-mpir.so =>
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-orted-mpir.so
> > (0x7f6e8fbdb000)
> > libopen-pal.so.40 =>
> > /cluster/software/OpenMPI/4.1.1-GCC-11.2.0/lib/libopen-pal.so.40
> > (0x7f6e8a9ea000)
> > librt.so.1 => /lib64/librt.so.1 (0x7f6e8a7d5000)
> > libutil.so.1 => /lib64/libutil.so.1 (0x7f6e8a5d2000)
> > libhwloc.so.15 =>
> > /cluster/software/hwloc/2.5.0-GCCcore-11.2.0/lib/libhwloc.so.15
> > (0x7f6e8a575000)
> > libpciaccess.so.0 =>
> > /cluster/software/libpciaccess/0.16-GCCcore-11.2.0/lib/libpciaccess.so.0
> > (0x7f6e8a56a000)
> > libxml2.so.2 =>
> > /cluster/software/libxml2/2.9.10-GCCcore-11.2.0/lib/libxml2.so.2
> > (0x7f6e8a3f6000)
> > libz.so.1 => /cluster/software/zlib/1.2.11-GCCcore-11.2.0/lib/libz.so.1
> > (0x7f6e8a3dd000)
> > liblzma.so.5 =>
> /cluster/software/XZ/5.2.5-GCCcore-11.2.0/lib/liblzma.so.5
> > (0x7f6e8a3b5000)
> > libevent_core-2.0.so.5 => /lib64/libevent_core-2.0.so.5
> (0x7f6e8a18a000)
> > libevent_pthreads-2.0.so.5 => /lib64/libevent_pthreads-2.0.so.5
> > (0x7f6e89f87000)
> >
> > Thanks very much,
> > Jianbo
> >
> > On Mon, Nov 7, 2022 at 6:01 PM Satish Balay  wrote:
> >
> > > Glad you have it working. Thanks for 

[petsc-users] TSBEULER vs TSPSEUDO

2022-11-08 Thread Francesc Levrero-Florencio
Hi PETSc people,

We are running highly nonlinear quasi-static (steady-state) mechanical finite 
element problems with PETSc, currently using TSBEULER and the basic time adapt 
scheme.

What we do in order to tackle these nonlinear problems is to parametrize the 
applied loads with the time in the TS and apply them incrementally. While this 
usually works well, we have seen instances in which the adaptor would reject 
the time step according to the calculated truncation errors, even if the SNES 
converges in a small number of iterations. Another issue that we have recently 
observed is that in a sequence of converged time steps the adaptor decides to 
start cutting the time step to smaller and smaller values using the low clip 
default value of TSAdaptGetClip (again because the truncation errors are high 
enough). What can we do in order to avoid these issues? The first one is 
avoided by using TSAdaptSetAlwaysAccept, but the latter remains. We have tried 
setting the low clip value to its maximum accepted value of 1, but then the 
time increment does not increase even if the SNES always converges in 3 or 4 
iterations. Maybe a solution is to increase the tolerances of the TSAdapt?
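
For context, our current adaptor configuration is roughly the following sketch (the helper name and the clip values are just illustrative):

  #include <petscts.h>

  /* Sketch of how we currently configure the TS adaptor (values are illustrative). */
  static PetscErrorCode ConfigureAdaptor(TS ts)
  {
    TSAdapt adapt;

    PetscFunctionBeginUser;
    PetscCall(TSGetAdapt(ts, &adapt));
    PetscCall(TSAdaptSetAlwaysAccept(adapt, PETSC_TRUE)); /* stops the adaptor rejecting converged steps */
    PetscCall(TSAdaptSetClip(adapt, 0.1, 10.0));          /* low/high clip factors for dt changes */
    PetscFunctionReturn(0);
  }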

Another potential solution we have recently tried in order to tackle these 
issues is using TSPSEUDO (and deparametrizing the applied loads), but generally 
find that it takes a much longer time to reach an acceptable solution compared 
with TSBEULER. We have mostly used the default KSPONLY option, but we'd like to 
explore TSPSEUDO with NEWTONLS. A first question would be: what happens if the 
SNES fails to converge, does the solution get updated somehow in the 
corresponding time step? We have performed a few tests with TSPSEUDO and 
NEWTONLS, setting the maximum number of SNES iterations to a relatively low 
number (e.g. 5), and then always setting the SNES as converged in the poststage 
function, and found that it performs reasonably well, at least better than with 
the default KSPONLY (does this make any sense?).

Thanks a lot!

Regards,
Francesc.