Hello,

I have been using the LinearAlgebra::distributed::Vector class for MPI 
parallelization, since the way it works is closer to what I have used 
before and it seemed more flexible.

However, for parallelization, I have to use either a Trilinos or a PETSc 
matrix, since the native deal.II SparseMatrix is serial only (correct me 
if I'm wrong). Matrix-vector multiplications seem to work just fine 
between LA::distributed::Vector and the wrapped matrices. However, when it 
comes to LinearOperator, it looks like a TrilinosWrappers::SparseMatrix 
wrapped in a LinearOperator only works with a TrilinosWrappers::MPI::Vector, 
and the same holds for PETSc.
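For concreteness, here is a sketch of what I mean (deal.II types; assembly 
and vector initialization elided, and I may well be holding the 
LinearOperator API wrong):

```cpp
#include <deal.II/lac/linear_operator.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/la_parallel_vector.h>

using namespace dealii;

void sketch(const TrilinosWrappers::SparseMatrix &A)
{
  // Plain matrix-vector products with LA::distributed::Vector work:
  LinearAlgebra::distributed::Vector<double> src, dst;
  // ... initialize src/dst with the matrix's partitioning ...
  A.vmult(dst, src); // fine

  // Wrapping the same matrix in a LinearOperator seems to tie me to the
  // matching wrapper vector type:
  auto op = linear_operator<TrilinosWrappers::MPI::Vector>(A); // works

  // auto op2 =
  //   linear_operator<LinearAlgebra::distributed::Vector<double>>(A);
  // ^ this is what I would like to use, but it does not seem to work.
}
```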

I am wondering what the community uses as its go-to parallel matrices 
and vectors, and whether you have been mixing them, e.g. matrix-free 
operators with Trilinos/PETSc vectors, or PETSc matrices with 
LA::distributed::Vector. From what I have seen in some tutorials, there is 
a way to code things up so that the Trilinos and PETSc wrappers can be used 
interchangeably, but LA::distributed::Vector does not seem to be nicely 
interchangeable with the Trilinos/PETSc ones. 

I was rather hoping to be able to use LA::distributed::Vector for 
everything; am I expecting too much from it? Maybe I just need to extend 
the LinearOperator implementation to mix and match the data structures? 
And if I do commit to Trilinos matrices/vectors, will I have trouble doing 
matrix-free or GPU computations further down the road?

Best regards,

Doug

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en