Hi, thanks for your answer. I have measured the time it takes for
PETScWrappers::MPI::Vector, parallel::distributed::Vector<Number>, and
Vector<Number> to complete a very simple task: accessing every element of
the vector. Something like this (repeating the whole process 15 times to
average the results, and using very large vector sizes):
double a;
for (unsigned int i = 0; i < v.size(); ++i)
  a = v[i];
I'm running it with a single process, and the results are:
+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |      34.4s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Dealii parallel                 |        15 |    0.0421s |      0.12% |
| Dealii serial                   |        15 |     0.018s |     0.052% |
| PETSc wrapper                   |        15 |      34.3s |     1e+02% |
+---------------------------------+-----------+------------+------------+
This shows that the PETSc wrapper is ~1000 times slower at accessing its
elements than the others (even local elements, since I'm running a single
process, so it's not a communication issue). If, for example, I run it in
parallel with 2 processes, the parallel vectors do their job in about half
the time, but the factor of 1000 is simply too big to overcome. The problem
I find is that using the PETSc wrappers is mandatory for the parallel
solvers. Is this huge difference in performance normal? Is there any
work-around for the use of PETSc wrappers when dealing with solvers and
other parallel classes?
David.
On Friday, 26 August 2016 14:05:55 UTC+2, Bruno Turcksin wrote:
>
> Hi,
>
> I guess it's more a question of preference. What I do is use the same
> vector type as the matrix type: PETSc matrix -> PETSc vector, Trilinos
> matrix -> Trilinos vector, matrix-free -> deal.II vector. The deal.II
> vector can use multithreading, unlike the PETSc vector, but if you are
> using MPI, I don't think you will see a big difference.
>
> Best,
>
> Bruno
>
> On Thursday, August 25, 2016 at 5:30:31 PM UTC-4, David F wrote:
>>
>> Hello, I would like to know whether, between PETScWrappers::MPI::Vector
>> and parallel::distributed::Vector<Number>, one is preferred over the
>> other. They both seem to have similar functionality and a similar
>> interface. Although parallel::distributed::Vector<Number> has a bigger
>> interface, PETScWrappers::MPI::Vector is used extensively in the examples.
>> In which situations should we use each of them? Is there any known
>> difference in performance? Thanks.
>>
>