Mark, Barry,
Thanks for the help. For now I'm sticking with the approach of copying the
CSR matrix and using the CSR data structures to do the preallocations. I'll
eventually get around to writing the code for assembling directly into the
PETSc matrix. I have two more questions.
1) On 1 [...]
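As a rough sketch of that CSR-driven preallocation (everything here is
illustrative: the helper name, the argument names, and the assumptions that
the column indices are global and the matrix is square with matching row and
column ownership), the per-row diagonal and off-diagonal counts can be read
straight off the CSR row pointer:

#include <petscmat.h>

/* Hypothetical helper, not from the original code: given the locally owned
   rows of a CSR matrix whose column indices are GLOBAL, preallocate an
   MPIAIJ matrix.  firstRow is the first global row owned by this process,
   nLocal the number of owned rows. */
static PetscErrorCode PreallocateFromCSR(Mat A,PetscInt nLocal,PetscInt firstRow,
                                         const PetscInt rowptr[],const PetscInt colind[])
{
  PetscInt       r,k,*d_nnz,*o_nnz;
  PetscErrorCode ierr;

  ierr = PetscMalloc(nLocal*sizeof(PetscInt),&d_nnz);CHKERRQ(ierr);
  ierr = PetscMalloc(nLocal*sizeof(PetscInt),&o_nnz);CHKERRQ(ierr);
  for (r=0; r<nLocal; r++) {
    d_nnz[r] = o_nnz[r] = 0;
    for (k=rowptr[r]; k<rowptr[r+1]; k++) {
      /* columns owned by this process land in the "diagonal" block */
      if (colind[k] >= firstRow && colind[k] < firstRow+nLocal) d_nnz[r]++;
      else                                                      o_nnz[r]++;
    }
  }
  ierr = MatMPIAIJSetPreallocation(A,0,d_nnz,0,o_nnz);CHKERRQ(ierr);
  ierr = PetscFree(d_nnz);CHKERRQ(ierr);
  ierr = PetscFree(o_nnz);CHKERRQ(ierr);
  return 0;
}

With the counts set this way, later MatSetValues() calls that stay inside the
preallocated pattern trigger no mallocs during assembly.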
---
On Thu, Oct 29, 2009 at 1:46 PM, Chris Kees
christopher.e.kees at usace.army.mil wrote:
> Thanks for the help. For now I'm sticking with the approach of copying the
> CSR matrix and using the CSR data structures to do the preallocations. I'll
> eventually get around to writing the [...]
---
I pulled the most recent version of petsc-dev and added superlu_dist. The
only case that fails is spooles on 1 processor with an mpiaij matrix created
with the Create/SetSizes/... paradigm. I would suspect something in the
spooles wrapper code.
code allocating mat: [...]
---
Hmm, this should not happen. The matrix should be identical in both cases
(note the initial residual is the same in both cases, so they may be
identical).
Here's one more thing you can try. In the same routine create TWO matrices,
one each way, then use MatAXPY() to take the difference [...]
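The message is cut off above; a minimal sketch of the comparison being
suggested (the names A and B and the choice of norm are assumptions) would
subtract one matrix from the other and look at a norm of the result:

/* Hypothetical sketch: A assembled one way, B the other, both fully assembled. */
PetscReal      nrm;
PetscErrorCode ierr;

/* B <- B - A; DIFFERENT_NONZERO_PATTERN is the safe choice if the two
   preallocation patterns are not guaranteed to match */
ierr = MatAXPY(B,-1.0,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
ierr = MatNorm(B,NORM_INFINITY,&nrm);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD,"||B - A||_inf = %g\n",(double)nrm);CHKERRQ(ierr);

A difference of 0, as reported below, means the two matrices carry the same
entries.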
---
I think I added the correct test code and got a difference of 0, but spooles
is still not returning the correct result, at least not until the 2nd
iteration. I'm attaching some of the code and output.
Chris
code for mat creates: [...]
---
On Oct 29, 2009, at 5:02 PM, Chris Kees wrote:
> I think I added the correct test code and got a difference of 0, but
> spooles is still not returning the correct result, at least not until the
> 2nd iteration. I'm attaching some of the code and output.
We have not been actively updating [...]
---
Have you run this with the debug version of PETSc? It does additional
argument testing that might catch a funny value passed in.
Barry
---
On Oct 29, 2009, at 8:43 PM, Hong Zhang wrote:
> On Oct 29, 2009, at 5:02 PM, Chris Kees wrote:
>> I think I added the correct test code and got a difference of 0, but
>> spooles is still not returning the correct result, at least not until
>> the 2nd iteration. I'm attaching some of the code [...]
I'm just using a simple configuration:
./config/configure.py --with-clanguage=C --with-cc='/usr/bin/mpicc -arch x86_64' --with-cxx='/usr/bin/mpicxx -arch x86_64' --without-fortran --with-mpi-compilers --without-shared --without-dynamic --download-parmetis=ifneeded [...]
---
On Oct 20, 2009, at 10:13 PM, Chris Kees wrote:
Thanks. Just to make sure I'm following, when I create the matrix I do:

/* create a parallel AIJ matrix with n local and N global rows/columns */
MatCreate(PETSC_COMM_WORLD,&self->m);
MatSetSizes(self->m,n,n,N,N);
MatSetType(self->m,MATMPIAIJ);
MatSetFromOptions(self->m);
---
Chris, just a note,
Perhaps I missed something here, but I do something similar to you, e.g.
overlapping partitions, and PETSc is set up very well to be intrusive in
your code (I sometimes write a little 'addvalue' wrapper) and assemble your
FE matrices directly into a global matrix. You use the [...]
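The note is truncated; as a hedged illustration of the kind of thin
'addvalue' wrapper being described (the function name and arguments are made
up here), an element matrix can be added straight into the global matrix
with MatSetValues() in ADD_VALUES mode using global equation numbers:

/* Hypothetical wrapper: add an ne x ne element matrix (row-major in ke)
   into the global matrix at the element's global degree-of-freedom numbers. */
static PetscErrorCode AddElementValues(Mat A,PetscInt ne,const PetscInt gdof[],
                                       const PetscScalar ke[])
{
  PetscErrorCode ierr;
  ierr = MatSetValues(A,ne,gdof,ne,gdof,ke,ADD_VALUES);CHKERRQ(ierr);
  return 0;
}

Entries destined for rows owned by another process are stashed and
communicated automatically when MatAssemblyBegin()/MatAssemblyEnd() are
called with MAT_FINAL_ASSEMBLY after the element loop.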
---
Hi,
Our unstructured finite element code does a fairly standard overlapping
decomposition that allows it to calculate all the non-zero column entries
for the rows owned by the processor (rows 0...ldim-1 in the local numbering).
We assemble them into a local CSR matrix and then copy them [...]
---
Here's the deal. In parallel PETSc does not use a single CSR matrix on each
process to hold the MPIAIJ matrix. Hence if you store the matrix on a
process as a single CSR matrix then it has to be copied into the PETSc data
structure. MatMPIAIJSetPreallocationCSR() does the copy very [...]
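The explanation is cut off above; for reference, a minimal sketch of the
call being described (the array names are illustrative): the row pointer
covers only the locally owned rows, the column indices are global, and PETSc
copies everything into its own MPIAIJ storage, so the CSR arrays can be
freed or reused afterwards.

/* Hypothetical sketch: hand the finished local CSR data to PETSc.
   rowptr has nLocal+1 entries for the locally owned rows, colind holds
   GLOBAL column indices, and vals the corresponding values (vals may be
   NULL to set only the nonzero pattern). */
ierr = MatMPIAIJSetPreallocationCSR(self->m,rowptr,colind,vals);CHKERRQ(ierr);

If the sparsity pattern stays fixed while the values change each step, the
entries can later be reset with MatSetValues() followed by the usual
MatAssemblyBegin()/MatAssemblyEnd().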