Dear PETSc team, 

I'm currently working on parallelizing the assembly of a system that was 
previously assembled serially (by hand) but solved in parallel using PETSc. 
The problem I have is that, when comparing computational time with the 
previous implementation, the parallel version seems to be slower than the 
serial one. 

The matrices we deal with are sparse and can change size significantly 
(contact-type problems, where the relations between elements change). 
In the example I'm using, the initial size of the matrix is 139905; after 
several iterations it changes to 141501, and finally to 254172. 

The system is assembled and solved at each iteration, and the matrix cannot 
be re-used. Therefore, for each new iteration the matrix is set to zero 
keeping the previous non-zero pattern, and the option 
'MAT_NEW_NONZERO_LOCATIONS' is set to 'PETSC_TRUE'. 
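
For reference, a simplified sketch of the reset step at the start of each 
iteration ('A' is just a placeholder name for my system matrix):

    MatZeroEntries(A);  /* zero the values, keep the existing non-zero pattern */
    MatSetOption(A, MAT_NEW_NONZERO_LOCATIONS, PETSC_TRUE);  /* allow new entries */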
To do the assembly I use the function 'MatSetValues', inserting 3 rows and 3 
columns at a time; the indices might not be adjacent to each other, and thus 
might not constitute a block. 
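
In simplified form, each insertion looks like this (the index and value 
arrays here are made up just to illustrate the call, assuming contributions 
are accumulated with ADD_VALUES):

    PetscInt    rows[3] = {10, 57, 203};  /* example global row indices, not contiguous */
    PetscInt    cols[3] = {10, 57, 203};  /* example global column indices */
    PetscScalar vals[9] = {0};            /* 3x3 dense block of contributions, row-major */
    MatSetValues(A, 3, rows, 3, cols, vals, ADD_VALUES);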

I believe that what makes the important difference in time is the fact of 
adding almost double the number of elements (from 139905 to 254172), but I 
don't know what I could implement to retain a larger preallocation, or how to 
solve this in any other way. 
Nor do I know the positions of the new elements in advance, so I cannot 
insert explicit zeros to, perhaps, generate a pre-pattern. 
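
To illustrate what I mean by retaining a larger preallocation, I imagine 
something like the sketch below, but I don't know if this is the right 
approach (the local row count and the overestimation factor are made up):

    PetscInt nlocal = 1000;             /* hypothetical number of local rows */
    PetscInt *d_nnz, *o_nnz;
    PetscMalloc2(nlocal, &d_nnz, nlocal, &o_nnz);
    for (PetscInt i = 0; i < nlocal; i++) {
      d_nnz[i] = 2 * 9;                 /* twice the current diagonal-block estimate */
      o_nnz[i] = 2 * 9;                 /* twice the off-diagonal-block estimate */
    }
    MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
    PetscFree2(d_nnz, o_nnz);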

Do you have any idea how I could improve the time of the parallel version? 

Thanks in advance! 

Regards, 
Catherine 
