Hello,
I want to solve many symmetric linear systems one after another in parallel
using boomerAMG + KSPCG and need to make the matrix transfer more efficient.
Matrices are symmetric in structure and values. boomerAMG + KSPCG work fine.
So far I have been loading the entire matrices but I read
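A minimal sketch of the kind of setup referred to here, assuming a Mat A and Vecs b, x already exist and PETSc was configured with hypre (names are placeholders):

  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPCG);              /* matrix is symmetric, so CG applies     */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCHYPRE);
  PCHYPRESetType(pc, "boomeramg");
  KSPSetFromOptions(ksp);              /* picks up -pc_hypre_boomeramg_* options */
  KSPSolve(ksp, b, x);
  KSPDestroy(&ksp);

For repeated solves where only the values change, the same Mat can typically be refilled in place and the KSP reused; PETSc rebuilds the preconditioner on the next solve.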
Hello,
I am trying to use the Hypre BoomerAMG preconditioner but I keep getting type
conversion errors like this one for PetscOptionsSetValue(...):
error: »const char*« cannot be converted to »PetscOptions« {aka »_n_PetscOptions*«}
PetscOptionsSetValue("-pc_hypre_boomeramg_no_CF","tru
8, 15:12:58 CET, Dave May wrote the following:
On Mon, 3 Dec 2018 at 13:52, Klaus Burkart via petsc-users
wrote:
Hello,
I want to solve a CFD case; after decomposition, I get a sub-matrix allocated
to each process. The example below shows how the data is allocated to the
processes
Hello,
I am trying to integrate PETSc into an application, and I think it would be much
simpler if I could bypass the application's original MPI functionality by
starting MPI with n processes when initializing PETSc and stopping it when
PetscFinalize() is called. The standard mpirun -np 4 application
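PetscInitialize() will call MPI_Init() itself if MPI has not already been initialized, and PetscFinalize() shuts MPI down again, so a sketch along these lines is enough; the number of processes is still chosen by the launcher (e.g. mpirun -np 4):

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscMPIInt size;

    PetscInitialize(&argc, &argv, NULL, NULL);   /* starts MPI if the app has not */
    MPI_Comm_size(PETSC_COMM_WORLD, &size);
    PetscPrintf(PETSC_COMM_WORLD, "running on %d processes\n", (int)size);
    /* ... set up and solve ... */
    PetscFinalize();                             /* finalizes the MPI it started  */
    return 0;
  }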
Hello,
I want to solve a CFD case; after decomposition, I get a sub-matrix allocated
to each process. The example below shows how the data is allocated to the
processes (the sample data includes only the lower parts of the matrices). Row
and column addresses are local.
What PETSc program setup
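A sketch of one possible setup, assuming each rank holds n_local rows and coefficient triples with local indices (local_row[k], local_col[k], val[k]); the names are placeholders, and converting a local column index by adding rstart is only valid for couplings inside the rank's own block, interface couplings need the true global column index:

  Mat      A;
  PetscInt rstart, rend, k;

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, n_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(A, MATAIJ);
  MatSetUp(A);                              /* or preallocate, see further down          */
  MatGetOwnershipRange(A, &rstart, &rend);  /* this rank owns global rows rstart..rend-1 */

  for (k = 0; k < n_entries; ++k)
    MatSetValue(A, rstart + local_row[k], rstart + local_col[k], val[k], INSERT_VALUES);

  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);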
Hello,
I use the following routine to transfer the PETSc result back to my application,
which works fine on one core because the local part of the linear system ==
the global part:
VecGetArray(petsc_x, &array);
for (i=rstart; i < rend; ++i) {
application_x[i] = array[i];
}
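On one process that loop works because rstart is 0 and the local block is the whole vector; with several processes VecGetArray() returns only the local block, indexed from 0 to local_size-1. A sketch of the same copy for the parallel case, assuming application_x is addressed with global indices on each rank:

  PetscScalar *array;
  PetscInt    rstart, rend, i;

  VecGetOwnershipRange(petsc_x, &rstart, &rend);
  VecGetArray(petsc_x, &array);
  for (i = rstart; i < rend; ++i)
    application_x[i] = array[i - rstart];   /* array holds only the local block */
  VecRestoreArray(petsc_x, &array);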
Use PetscSynchronizedPrintf() to have each process print its own values.
Barry
> On Nov 23, 2018, at 6:44 AM, Klaus Burkart via petsc-users
> wrote:
>
> Hello,
>
> I am trying to compute the local row ranges allocated to the processes, i.e.
> rstart and rend of each
The output is:
local_size = 25, on process 0
rstart = 0, on process 0
rend = 25, on process 0
local_size = 25, on process 1
rstart = 0, on process 1
rend = 25, on process 1
local_size = 25, on process 2
rstart = 0, on process 2
rend = 25, on process 2
local_size = 25, on process
Hello,
I am trying to compute the local row ranges allocated to the processes, i.e.
rstart and rend of each process, needed as a prerequisite for
MatMPIAIJSetPreallocation using d_nnz and o_nnz.
I tried the following:
...
PetscInitialize(0,0,PETSC_NULL,PETSC_NULL);
MPI_Comm_size(PETS
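A sketch of one way to get rstart/rend before any matrix exists: let PETSc split the N global rows, then turn the local sizes into ranges with a prefix sum (N here is a placeholder):

  PetscInt    N = 100;                      /* global number of rows (assumed) */
  PetscInt    n_local = PETSC_DECIDE, rstart, rend;
  PetscMPIInt rank;

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  PetscSplitOwnership(PETSC_COMM_WORLD, &n_local, &N);    /* fills n_local */
  MPI_Scan(&n_local, &rend, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
  rstart = rend - n_local;                                /* this rank owns rows rstart..rend-1 */
  PetscSynchronizedPrintf(PETSC_COMM_WORLD, "rank %d: rstart = %d, rend = %d\n",
                          rank, (int)rstart, (int)rend);
  PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);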
Hello,
I am trying to initialize PETSc from within application code, but I keep getting
the following error.
symbol lookup error:
/home/klaus/OpenFOAM/klaus-5.0/platforms/linux64GccDPInt32Opt/lib/libmyPCG2.so:
undefined symbol: PetscInitialize
The code is just:
PetscInitialize(0,0,PETSC_N
> On Apr 20, 2018, at 10:05 AM, Klaus Burkart wrote:
>
> I think I understand the matrix structure for parallel computation with the
> rows, diagonal (d) and off-diagonal (o) structure; where I have problems is
> how to do the setup including memory allocation in PETSc:
>
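A sketch of that setup with per-row preallocation, assuming d_nnz[i] and o_nnz[i] hold, for local row i, the number of nonzeros inside and outside the rank's diagonal block:

  Mat A;

  MatCreate(PETSC_COMM_WORLD, &A);
  MatSetSizes(A, n_local, n_local, PETSC_DETERMINE, PETSC_DETERMINE);
  MatSetType(A, MATMPIAIJ);
  MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
  /* then MatSetValues() with global indices and MatAssemblyBegin/End() */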
Hi,
How can I preallocate space for matrices (some symmetric, others asymmetric) if
I have the global number of nonzeros (NNZ) but not the number of nonzeros per
row? I could compute the NNZ for the upper or lower part separately if this
would be useful for symmetric matrices.
I create the mat
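If the matrix is already available in CSR form, the per-row counts can be recovered from the row-pointer array; a sketch, assuming global column indices in cols[] and rows rstart..rend-1 owned by this rank (array names are placeholders):

  PetscInt i, j, nloc = rend - rstart;
  PetscInt *d_nnz, *o_nnz;

  PetscMalloc2(nloc, &d_nnz, nloc, &o_nnz);
  for (i = 0; i < nloc; ++i) {
    d_nnz[i] = 0; o_nnz[i] = 0;
    for (j = row_ptr[rstart + i]; j < row_ptr[rstart + i + 1]; ++j) {
      if (cols[j] >= rstart && cols[j] < rend) d_nnz[i]++;  /* diagonal block     */
      else                                     o_nnz[i]++;  /* off-process column */
    }
  }
  /* pass d_nnz/o_nnz to MatMPIAIJSetPreallocation(), then PetscFree2(d_nnz, o_nnz) */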
I am confused about matrix object types and which one to use in my case:
I want to solve a linear system using multiple processes. The step I am
struggling with is the import of the global matrix and global RHS vector into
applicable PETSc objects.
The global matrix is currently stored in CSR
then saves
it with MatView() and a binary viewer. You can then load the matrix easily and
efficiently in PETSc in parallel with MatLoad
3) If you have the matrix already in CSR format you can use
MatCreateSeqAIJWithArrays()
Barry
> On Sep 17, 2017, at 9:25 AM, Klaus Burkart wrote:
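A sketch of that save/load path (the file name is a placeholder):

  PetscViewer viewer;

  /* write once, e.g. from a sequential conversion tool */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_WRITE, &viewer);
  MatView(A, viewer);
  PetscViewerDestroy(&viewer);

  /* load in parallel; MatLoad() distributes the rows */
  Mat B;
  MatCreate(PETSC_COMM_WORLD, &B);
  MatSetType(B, MATMPIAIJ);
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);
  MatLoad(B, viewer);
  PetscViewerDestroy(&viewer);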
The matrix import function looks like this:
void csr2pet
(
    const Foam::lduMatrix & matrix,
    petsc_declaration & petsc_matrix // How to declare the PETSc matrix to be filled?
)
{
    int n = matrix.diag().size();   // small case n = 40800
    int nnz = matrix.lower().size() + matrix.upper().
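A sketch of one way the declaration and creation could look, using the MatCreateSeqAIJWithArrays() route Barry mentions; the function name, the CSR arrays row_ptr/col_idx/vals, and their filling from the lduMatrix are placeholders, and PETSc uses the arrays directly rather than copying them, so they must stay allocated while the Mat is in use:

  #include <petscmat.h>

  void csr2petsc(const Foam::lduMatrix& matrix, Mat& petsc_matrix)
  {
      PetscInt     n   = matrix.diag().size();
      PetscInt     nnz = matrix.diag().size() + matrix.lower().size() + matrix.upper().size();
      PetscInt    *row_ptr, *col_idx;
      PetscScalar *vals;

      PetscMalloc1(n + 1, &row_ptr);
      PetscMalloc1(nnz,   &col_idx);
      PetscMalloc1(nnz,   &vals);
      /* ... fill row_ptr/col_idx/vals from matrix.diag(), lower(), upper() ... */

      MatCreateSeqAIJWithArrays(PETSC_COMM_SELF, n, n,
                                row_ptr, col_idx, vals, &petsc_matrix);
  }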