This call overwrites x with the error (x - u):
call VecAXPY(x,neg_one,u,ierr)   ! x <- x + neg_one*u, with neg_one = -1.0
I suspect you printed these numbers after this statement.
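If you want the error norm while keeping the solution intact, one way is to
form the difference in a scratch vector instead. A minimal C sketch, assuming
Vec x (computed solution) and Vec u (exact solution) as in the example:

  Vec            err;
  PetscReal      norm;
  PetscErrorCode ierr;

  ierr = VecDuplicate(x,&err);CHKERRQ(ierr);
  ierr = VecWAXPY(err,-1.0,u,x);CHKERRQ(ierr);   /* err = x - u; x is untouched */
  ierr = VecNorm(err,NORM_2,&norm);CHKERRQ(ierr);
  ierr = VecDestroy(&err);CHKERRQ(ierr);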
On Mon, Jan 14, 2019 at 8:10 PM Maahi Talukder via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hello all,
>
> I compiled and ran the example *ex2f.F90* located in
>
Fande:
According to this PR,
https://bitbucket.org/petsc/petsc/pull-requests/1061/a_selinger-feature-faster-scalable/diff
should we set the scalable algorithm as the default?
Sure, we can. But I feel we need to do more tests comparing the scalable and
non-scalable algorithms.
In theory, for small to
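For anyone running such tests: the algorithm can also be selected per run with
a command-line option, without changing the default. A hedged example, where
./app stands in for your executable and -matmatmult_via is the option name as
in recent PETSc releases:

  mpiexec -n 2 ./app -matmatmult_via scalable

That makes it easy to compare the two variants on the same problem before
deciding on a new default.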
> On Jan 14, 2019, at 7:26 AM, Matthew Knepley via petsc-users
> wrote:
>
> On Mon, Jan 14, 2019 at 7:55 AM Yaxiong Chen wrote:
> So must I figure out the index of the zero columns and rows to get the null
> space first, and then remove it to generate a Cholesky or LU
> preconditioner? In
Hi Hong,
According to this PR,
https://bitbucket.org/petsc/petsc/pull-requests/1061/a_selinger-feature-faster-scalable/diff
should we set the scalable algorithm as the default?
Thanks,
Fande Kong
On Fri, Jan 11, 2019 at 10:34 AM Zhang, Hong via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Add
This time, it crashes at
[6]PETSC ERROR: #1 MatTransposeMatMultSymbolic_MPIAIJ_MPIAIJ() line 1989 in
/lustre/home/vef002/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c
ierr = PetscMalloc1(bi[pn]+1,...);
which allocates the local portion of B^T*A.
You may also try to increase the number of cores to reduce the memory needed
on each process.
The memory requested is an insane number. You may need to use 64-bit
integers.
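Switching to 64-bit matrix/vector indices requires reconfiguring and
rebuilding PETSc. A minimal sketch of the configure step (the flag is
standard; any other options depend on your installation):

  ./configure --with-64-bit-indices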
On Mon, Jan 14, 2019 at 8:06 AM Sal Am via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> I ran it by: mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20
> --log-file=valgrind.log-osa.%p ./solveCSys -malloc
On Mon, Jan 14, 2019 at 7:55 AM Yaxiong Chen wrote:
> So must I figure out the index of the zero columns and rows to get the null
> space first, and then remove it to generate a Cholesky or LU
> preconditioner? In this case, should the nontrivial null space be (1,0,0,0)?
>
No.
1) If you have a
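One common alternative to deleting the zero row and column yourself is to tell
the solver about the known null-space vector. A minimal C sketch, assuming a
Mat A and a Vec nullvec already filled with the (1,0,0,0) mode (both names are
placeholders); note this helps Krylov methods, while a direct LU or Cholesky
factorization still cannot handle an exactly singular matrix:

  MatNullSpace   nullsp;
  PetscErrorCode ierr;

  ierr = VecNormalize(nullvec,NULL);CHKERRQ(ierr);   /* null-space vectors must have unit norm */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_FALSE,1,&nullvec,&nullsp);CHKERRQ(ierr);
  ierr = MatSetNullSpace(A,nullsp);CHKERRQ(ierr);    /* KSP projects this out of the residual */
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);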