Dear All,
I am implementing a linear solver interface in a flow solver with support for PETSc. My application uses a parallel CSR representation and manages the memory for it. I would like to wrap PETSc matrices (and vectors) around it so that I can use the PETSc solvers as well. I plan to use MatMPIBAIJSetPreallocationCSR and VecCreateMPIWithArray for lightweight wrapping. The matrix structure is static over the course of the iterations.

I am using a derived context class to host the PETSc-related context. This context holds references to the PETSc matrix and vectors as well as the KSP/PC required to call the solver API later in the iteration loop. I would like to create as much as possible during creation of the context at the beginning of the iterations (the context will live through the iterations). My understanding is that MatMPIBAIJSetPreallocationCSR and VecCreateMPIWithArray DO NOT copy, so that I can wrap the PETSc types around the memory managed by the hosting linear solver framework in the application. The system matrix and RHS (the pointers to these arrays are passed to MatMPIBAIJSetPreallocationCSR and VecCreateMPIWithArray, respectively) are assembled by the application before any call to a linear solver.

Given this setting, my plan for every iteration is to retrieve the PETSc objects from the context (Mat, Vec, KSP) and simply call KSPSolve without any other PETSc calls (still assuming the matrix structure is static during the iterations). What is not clear to me: are any MatSetValues/VecSetValues calls followed by MatAssemblyBegin/End and VecAssemblyBegin/End calls required for this setting? The data in the arrays whose pointers have been passed to MatMPIBAIJSetPreallocationCSR and VecCreateMPIWithArray is computed prior to any solver call in an iteration, so I am assuming no additional "set value" calls through PETSc are required. Am I missing something important by assuming this?

For concreteness, I have appended a rough sketch of the setup I have in mind below.

Thank you for taking the time!

-- fabs
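P.S. Below is a minimal sketch of the context creation I described, assuming a recent PETSc (PetscCall-style error checking). The names SolverCtx, rowptr, colind, values, rhs and sol are just placeholders for the application-owned block-CSR arrays and vectors; this is only meant to illustrate the intended call sequence, not a complete implementation.

#include <petscksp.h>

/* PETSc-related context created once at the start of the iterations. */
typedef struct {
  Mat A;      /* wraps the application's (block-)CSR matrix */
  Vec b, x;   /* wrap the application's RHS and solution arrays */
  KSP ksp;    /* solver (and PC via KSPGetPC / options database) */
} SolverCtx;

/* nlocal  : local number of (scalar) rows, a multiple of bs
 * rowptr  : block-row pointer array of the local block-CSR structure
 * colind  : global block-column indices
 * values  : matrix values, rhs/sol: application-managed vector arrays */
PetscErrorCode SolverCtxCreate(MPI_Comm comm, PetscInt bs, PetscInt nlocal,
                               const PetscInt *rowptr, const PetscInt *colind,
                               const PetscScalar *values,
                               PetscScalar *rhs, PetscScalar *sol,
                               SolverCtx *ctx)
{
  PetscFunctionBeginUser;
  /* Matrix: set sizes and type first, then hand over the CSR arrays. */
  PetscCall(MatCreate(comm, &ctx->A));
  PetscCall(MatSetSizes(ctx->A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE));
  PetscCall(MatSetType(ctx->A, MATMPIBAIJ));
  PetscCall(MatMPIBAIJSetPreallocationCSR(ctx->A, bs, rowptr, colind, values));

  /* Vectors: wrap the application-managed arrays. */
  PetscCall(VecCreateMPIWithArray(comm, bs, nlocal, PETSC_DECIDE, rhs, &ctx->b));
  PetscCall(VecCreateMPIWithArray(comm, bs, nlocal, PETSC_DECIDE, sol, &ctx->x));

  /* Solver: configure once, reuse in every iteration. */
  PetscCall(KSPCreate(comm, &ctx->ksp));
  PetscCall(KSPSetOperators(ctx->ksp, ctx->A, ctx->A));
  PetscCall(KSPSetFromOptions(ctx->ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}

/* Inside the iteration loop, the intent is to call only:
 *   PetscCall(KSPSolve(ctx->ksp, ctx->b, ctx->x));
 * after the application has refilled the matrix/RHS arrays. */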