Hi, thanks for your answer. I have measured the time it takes for 
PETScWrappers::MPI::Vector, parallel::distributed::Vector< Number > and 
Vector< Number > to complete a very simple task, consisting of simply 
accessing the elements and assigning them to another variable, something 
like:

double a;
for(...)


On Friday, 26 August 2016 14:05:55 UTC+2, Bruno Turcksin wrote:
>
> Hi,
>
> I guess it's more a question of preference. What I do is use the same 
> vector type as the matrix type: PETSc matrix -> PETSc vector, Trilinos 
> matrix -> Trilinos vector, matrix-free -> deal.II vector. The deal.II 
> vector can use multithreading, unlike the PETSc vector, but if you are 
> using MPI, I don't think that you will see a big difference.
>
> Best,
>
> Bruno
>
> On Thursday, August 25, 2016 at 5:30:31 PM UTC-4, David F wrote:
>>
>> Hello, I would like to know whether, between PETScWrappers::MPI::Vector 
>> and parallel::distributed::Vector< Number >, one is preferred over the 
>> other. They both seem to have similar functionality and a similar 
>> interface. Although parallel::distributed::Vector< Number > has a larger 
>> interface, PETScWrappers::MPI::Vector is used extensively in the examples. 
>> In which situations should we use each of them? Is there any known 
>> difference in performance? Thanks.
>>
>

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
For more options, visit https://groups.google.com/d/optout.