The matrix import function looks like this:

void csr2pet
(
    petsc_declaration<Type> & petsc_matrix  // How to declare the PETsc matrix 
to be filled?
)
{
    int n = matrix.diag().size(); // small case n = 40800
    int nnz = matrix.lower().size() + matrix.upper().size() + matrix.diag().size(); // small case nnz = 203800

    // allocate memory for CSR sparse matrix using calloc
    ScalarType * vals = (ScalarType *)calloc(nnz, sizeof(ScalarType));
    uint * cols = (uint *)calloc(nnz, sizeof(uint));
    uint * rows = (uint *)calloc(n + 1, sizeof(uint)); // CSR row-start array needs n + 1 entries (last entry equals nnz)

    // call function to convert original LDU matrix to CSR format
    exPet::ldu2csr(matrix,rows,cols,vals);

    // fill the PETSc matrix
    MatSetValues(petsc_matrix, ?, ?, ?, ?, ?, INSERT_VALUES);

    // free and release the matrix memory
    free(rows); free(cols); free(vals);  // calloc()
}


Questions:

1: How should petsc_matrix be declared so that the function can fill it with the
content of the original matrix?
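
From reading the PETSc manual pages my current guess is that the function simply
takes a plain Mat handle and creates it internally with MatCreate(), MatSetSizes()
and MatSetType(). The MATAIJ type and the communicator below are only my
assumptions, and the sketch is untested:

#include <petscmat.h>

void csr2pet
(
    const Foam::lduMatrix & matrix,
    Mat & petsc_matrix           // guess: a plain Mat handle, created inside the function
)
{
    PetscInt n = matrix.diag().size();

    MatCreate(PETSC_COMM_WORLD, &petsc_matrix);                   // empty matrix object
    MatSetSizes(petsc_matrix, PETSC_DECIDE, PETSC_DECIDE, n, n);  // global size n x n
    MatSetType(petsc_matrix, MATAIJ);                             // sparse AIJ (CSR-like) storage
    MatSetUp(petsc_matrix);                                       // default preallocation

    // ... CSR conversion and MatSetValues as above ...
}

Is that the intended way, or should the Mat be created by the caller?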

2: MatSetValues(petsc_matrix, ?, ?, ?, ?, ?, INSERT_VALUES); is meant to actually
fill petsc_matrix. I was under the impression that PETSc uses the CSR format
internally, but I cannot work out how a CSR matrix maps onto the arguments that
MatSetValues is documented to take:

    v       - a logically two-dimensional array of values
    m, idxm - the number of rows and their global indices
    n, idxn - the number of columns and their global indices

My original matrix is converted to CSR format, i.e. three arrays cols 
(column_indices), rows (row_start_indices) and vals (values).
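
The closest I have come is to insert the matrix one row at a time, using the
row-start offsets to slice cols and vals. This is only my guess at how the
arguments are meant to be used; it assumes rows has n + 1 entries, that the
indices are global and zero-based, and that uint/ScalarType have the same size as
PetscInt/PetscScalar (otherwise they would have to be copied into properly typed
buffers first). Untested:

for (PetscInt row = 0; row < n; ++row)
{
    PetscInt start  = rows[row];
    PetscInt ncolsR = rows[row + 1] - start;    // nonzeros in this row

    MatSetValues
    (
        petsc_matrix,
        1, &row,                                // m, idxm: one row and its global index
        ncolsR, (const PetscInt *)&cols[start], // n, idxn: its column indices
        (const PetscScalar *)&vals[start],      // v: the corresponding values
        INSERT_VALUES
    );
}

// assemble before the matrix can be used
MatAssemblyBegin(petsc_matrix, MAT_FINAL_ASSEMBLY);
MatAssemblyEnd(petsc_matrix, MAT_FINAL_ASSEMBLY);

Is this the intended use of MatSetValues, or is there a way to hand over the three
CSR arrays in one go?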

How can I load my matrix into a PETSc matrix for parallel processing? Is the same
call, MatSetValues(petsc_matrix, ?, ?, ?, ?, ?, INSERT_VALUES);, still the right
tool in that case, or is there a dedicated routine?
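
For the parallel case I have also come across MatCreateMPIAIJWithArrays(), which
appears to take the local CSR arrays directly (and, as far as I can tell from the
man page, copies them, so they can still be freed afterwards). Is that the
preferred route? A sketch, assuming each rank passes only the rows it owns, with
global column indices, and the same typedef assumptions as above:

Mat A;
MatCreateMPIAIJWithArrays
(
    PETSC_COMM_WORLD,
    n, n,                                 // rows/columns owned by this rank
    PETSC_DETERMINE, PETSC_DETERMINE,     // let PETSc compute the global sizes
    (const PetscInt *)rows,               // CSR row-start offsets (n + 1 entries, zero-based)
    (const PetscInt *)cols,               // global column indices
    (const PetscScalar *)vals,            // values
    &A
);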

Klaus
