Hello guys,
I am getting this error while using fieldsplit:
[3]PETSC ERROR: --------------------- Error Message ---------------------
[3]PETSC ERROR: Nonconforming object sizes
[3]PETSC ERROR: Local column sizes 6132 do not add up to total number of
Hi,
What reference simplex is DMPlexComputeCellGeometryAffineFEM using in 2D and 3D?
I am used to computing my shape functions on the unit simplex (vertices at the
origin and each e_i), but that does not appear to be the reference simplex used
by this function:
In 3D, for the unit simplex with vertices
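(A minimal sketch of how one might probe this, not taken from the original message: call DMPlexComputeCellGeometryAffineFEM on a single cell and print the translation v0 and Jacobian J of the affine map it returns, then compare against that cell's vertex coordinates to see which reference simplex is assumed. The DM, the cell number, and the helper name are placeholders.)

#include <petscdmplex.h>

/* Hypothetical helper: print the affine map data that
   DMPlexComputeCellGeometryAffineFEM returns for one cell, so the assumed
   reference simplex can be deduced by comparison with the cell's vertices. */
static PetscErrorCode PrintCellAffineMap(DM dm, PetscInt cell)
{
  PetscReal v0[3], J[9], invJ[9], detJ;
  PetscInt  dim;

  PetscFunctionBeginUser;
  PetscCall(DMGetDimension(dm, &dim));
  PetscCall(DMPlexComputeCellGeometryAffineFEM(dm, cell, v0, J, invJ, &detJ));
  PetscCall(PetscPrintf(PETSC_COMM_SELF, "cell %" PetscInt_FMT ": detJ = %g\n", cell, (double)detJ));
  for (PetscInt d = 0; d < dim; ++d) {
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "  v0[%" PetscInt_FMT "] = %g, J row:", d, (double)v0[d]));
    for (PetscInt e = 0; e < dim; ++e) PetscCall(PetscPrintf(PETSC_COMM_SELF, " %g", (double)J[d * dim + e]));
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "\n"));
  }
  PetscFunctionReturn(0);
}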
This is from our DMCreateFieldDecomposition_Moose routine. The IS size on
process 1 (which is the process from which I took the error in the original
post) is reported as 4129, which is consistent with the row size of A00.
Split '0' has local size 4129 on processor 1
Split '0' has local size 4484
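(As an aside, not from the thread: a quick way to cross-check this per rank is to print each split IS's local size next to the matrix's local row size, along the lines of the sketch below. The function name and arguments are made up for illustration.)

#include <petscmat.h>

/* Hypothetical diagnostic: report the local size of each split IS next to the
   local row size of the full matrix on this rank, so per-process mismatches
   like the one above are easy to spot. */
static PetscErrorCode ReportSplitSizes(Mat A, PetscInt nsplits, IS isplits[])
{
  PetscInt    m, n, total = 0;
  PetscMPIInt rank;
  MPI_Comm    comm = PetscObjectComm((PetscObject)A);

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(comm, &rank));
  PetscCall(MatGetLocalSize(A, &m, &n));
  for (PetscInt i = 0; i < nsplits; ++i) {
    PetscInt ni;
    PetscCall(ISGetLocalSize(isplits[i], &ni));
    total += ni;
    PetscCall(PetscSynchronizedPrintf(comm, "[%d] split %" PetscInt_FMT ": local size %" PetscInt_FMT "\n", rank, i, ni));
  }
  PetscCall(PetscSynchronizedPrintf(comm, "[%d] sum of splits %" PetscInt_FMT " vs local rows %" PetscInt_FMT "\n", rank, total, m));
  PetscCall(PetscSynchronizedFlush(comm, PETSC_STDOUT));
  PetscFunctionReturn(0);
}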
Francesc Levrero-Florencio writes:
> Hi Jed,
>
> Thanks for the answer.
>
> We do have a monolithic arc-length implementation based on the TS/SNES logic,
> but we are also exploring having a custom SNESSHELL because the arc-length
> logic is substantially more complex than that of traditional
Hi Jed,
Thanks for the answer.
We do have a monolithic arc-length implementation based on the TS/SNES logic,
but we are also exploring a custom SNESSHELL because the arc-length
logic is substantially more complex than that of traditional load-controlled
continuation methods. It works
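(For readers following along, a minimal sketch of what the SNESSHELL route can look like; the arc-length solve itself is only stubbed out here, and the names are placeholders rather than anyone's actual implementation.)

#include <petscsnes.h>

/* Hypothetical sketch: wrapping a custom arc-length continuation solve in a
   SNESSHELL so it can still be driven by the usual TS/SNES infrastructure. */
static PetscErrorCode ArcLengthSolve(SNES snes, Vec x)
{
  void *ctx;

  PetscFunctionBeginUser;
  PetscCall(SNESShellGetContext(snes, &ctx));
  /* ... perform the arc-length corrector iterations on x here, updating the
     load factor along with the displacement increment ... */
  PetscCall(SNESSetConvergedReason(snes, SNES_CONVERGED_ITS));
  PetscFunctionReturn(0);
}

static PetscErrorCode CreateArcLengthSNES(MPI_Comm comm, void *appctx, SNES *snes)
{
  PetscFunctionBeginUser;
  PetscCall(SNESCreate(comm, snes));
  PetscCall(SNESSetType(*snes, SNESSHELL));
  PetscCall(SNESShellSetContext(*snes, appctx));
  PetscCall(SNESShellSetSolve(*snes, ArcLengthSolve));
  PetscFunctionReturn(0);
}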
On Tue, Nov 8, 2022 at 12:05 PM Edoardo alinovi
wrote:
> Hello Guys,
>
> Thanks to your suggestions on the block matrices, my fully coupled solver
> is proceeding very well!
>
> I am now about to take advantage of the block structure of the matrix
> using PCFIELDSPLIT. I have learned a bit from
First, I believe arc-length continuation is the right approach in this problem
domain. I have a branch starting an implementation, but need to revisit it in
light of some feedback (and time has been too short lately).
My group's nonlinear mechanics solver uses TSBEULER because it's convenient
Hello Guys,
Thanks to your suggestions on the block matrices, my fully coupled solver
is proceeding very well!
I am now about to take advantage of the block structure of the matrix
using PCFIELDSPLIT. I have learned a bit from the user manual and followed
with interest this discussion in the
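(Again as an aside, not from the thread: one common way to exploit a 2-block structure with PCFIELDSPLIT is a Schur-complement factorization, sketched below in C under the assumption of a velocity/pressure-style split; the ISes and split names "u" and "p" are placeholders. The same configuration is usually driven from the command line with -pc_type fieldsplit -pc_fieldsplit_type schur and related options.)

#include <petscksp.h>

/* Hypothetical sketch: configure PCFIELDSPLIT with a Schur-complement
   factorization for a two-block system given index sets for each block. */
static PetscErrorCode ConfigureSchurFieldSplit(KSP ksp, IS is_u, IS is_p)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCFIELDSPLIT));
  PetscCall(PCFieldSplitSetIS(pc, "u", is_u));
  PetscCall(PCFieldSplitSetIS(pc, "p", is_p));
  PetscCall(PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR));
  PetscCall(PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL));
  PetscCall(PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_SELFP, NULL));
  PetscFunctionReturn(0);
}

With the split names chosen above, the inner solvers can then be tuned from the command line through the -fieldsplit_u_ and -fieldsplit_p_ option prefixes.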
Here are the ldd outputs:
>> ldd petsc_3.18_gnu/arch-linux-c-debug/lib/libpetsc.so
linux-vdso.so.1 => (0x7f23e5ff2000)
libflexiblas.so.3 => /cluster/software/FlexiBLAS/3.0.4-GCC-11.2.0/lib/libflexiblas.so.3 (0x7f23e1b6)
libpthread.so.0 => /usr/lib64/libpthread.so.0
On Tue, 8 Nov 2022, Satish Balay via petsc-users wrote:
> You don't see 'libstdc++' in the output from 'ldd libpetsc.so' below - so
> there is no reference
> to libstdc++ from petsc
>
> Try a clean build of PETSc and see if you still have these issues.
>
> ./configure --with-cc=gcc --with-cxx=0
On Tue, Nov 8, 2022 at 10:28 AM Jianbo Long wrote:
> I am suspecting something else as well ...
>
> Could you elaborate more about "mixing c++ codes compiled with
> /usr/bin/g++ and compilers in /cluster/software/GCCcore/11.2.0" ? My own
> Fortran code does not have any c++ codes, and for some
You don't see 'libstdc++' in the output from 'ldd libpetsc.so' below - so there
is no reference
to libstdc++ from petsc
Try a clean build of PETSc and see if you still have these issues.
./configure --with-cc=gcc --with-cxx=0 --with-fc=gfortran --download-fblaslapack --download-mpich
Another
I am suspecting something else as well ...
Could you elaborate more about "mixing c++ codes compiled with /usr/bin/g++
and compilers in /cluster/software/GCCcore/11.2.0" ? My own Fortran code
does not have any c++ codes, and for some reason, the compiled petsc
library is dependent on this
Hi PETSc people,
We are running highly nonlinear quasi-static (steady-state) mechanical finite
element problems with PETSc, currently using TSBEULER and the basic time
adaptivity scheme.
What we do in order to tackle these nonlinear problems is to parametrize the
applied loads with the time in the
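(A minimal sketch of the load-parametrized-by-time idea with TSBEULER and the basic adaptor, under stated assumptions: the AppCtx, the external load vector, and the scaling by t are placeholders, not the poster's actual code, and the internal force assembly is omitted.)

#include <petscts.h>

typedef struct {
  Vec F_ext; /* full external load vector (hypothetical) */
  /* ... application data: mesh, material state, ... */
} AppCtx;

/* Hypothetical quasi-static residual: no inertial term is used, and the
   applied load is scaled by the pseudo-time t in [0,1]. */
static PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  AppCtx *user = (AppCtx *)ctx;

  PetscFunctionBeginUser;
  /* F(U) = F_int(U) - t * F_ext; the internal force term is omitted here */
  PetscCall(VecCopy(user->F_ext, F));
  PetscCall(VecScale(F, -t));
  /* ... add internal force contribution assembled from U ... */
  PetscFunctionReturn(0);
}

static PetscErrorCode RunLoadStepping(MPI_Comm comm, AppCtx *user, Vec U)
{
  TS      ts;
  TSAdapt adapt;

  PetscFunctionBeginUser;
  PetscCall(TSCreate(comm, &ts));
  PetscCall(TSSetType(ts, TSBEULER));
  PetscCall(TSSetIFunction(ts, NULL, IFunction, user));
  PetscCall(TSSetMaxTime(ts, 1.0));  /* t = 1 corresponds to the full applied load */
  PetscCall(TSSetTimeStep(ts, 0.1)); /* initial load increment */
  PetscCall(TSSetExactFinalTime(ts, TS_EXACTFINALTIME_MATCHSTEP));
  PetscCall(TSGetAdapt(ts, &adapt));
  PetscCall(TSAdaptSetType(adapt, TSADAPTBASIC));
  PetscCall(TSSetFromOptions(ts));
  PetscCall(TSSolve(ts, U));
  PetscCall(TSDestroy(&ts));
  PetscFunctionReturn(0);
}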