On 30/04/2014, at 17:25, Steve Ndengue wrote:

> Yes, the matrix is sparse. 

How sparse?
How are you running the solver?
Where is the time spent (log_summary)?
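
For example, something along these lines (a sketch; ./eigensolve stands in
for your executable, adjust the options to your case):

    $ mpiexec -n 8 ./eigensolve -eps_type krylovschur -eps_nev 50 -log_summary

The -log_summary report breaks down where the time goes (matrix assembly,
orthogonalization, linear solves, ...).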

Jose


> 
> 
> On 04/30/2014 10:19 AM, Jose E. Roman wrote:
>> On 30/04/2014, at 17:10, Steve Ndengue wrote:
>> 
>> 
>>> Dear all,
>>> 
>>> I have a few questions about achieving convergence with SLEPc.
>>> I am comparing how SLEPc performs against the LAPACK
>>> installation on my system (an 8-processor Intel Core i7 at 3.4 GHz
>>> running Ubuntu).
>>> 
>>> 1/ It appears that a calculation using the LAPACK eigensolver runs
>>> faster with my own libraries than with SLEPc's 'lapack'
>>> method. I guess most of the time is spent assembling the matrix?
>>> The time seems reasonable for matrices smaller than
>>> 2000x2000, but for 4000x4000 and above the computation is
>>> more than ten times slower with SLEPc's 'lapack' method!
>>> 
>> Once again, do not use SLEPc's 'lapack' method; it is just for debugging
>> purposes.
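>> 
>> Instead, select an iterative solver explicitly. A minimal sketch in C
>> (error checking omitted; A is your assembled Mat):
>> 
>>     EPS eps;
>>     EPSCreate(PETSC_COMM_WORLD,&eps);
>>     EPSSetOperators(eps,A,NULL);
>>     EPSSetProblemType(eps,EPS_HEP);   /* Hermitian problem, if applicable */
>>     EPSSetType(eps,EPSKRYLOVSCHUR);
>>     EPSSetDimensions(eps,50,PETSC_DEFAULT,PETSC_DEFAULT);  /* nev=50 */
>>     EPSSetFromOptions(eps);   /* honor -eps_* options given at run time */
>>     EPSSolve(eps);
>> 
>> The same choice can be made at run time with -eps_type krylovschur
>> -eps_nev 50.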
>> 
>> 
>>> 2/ I was expecting that an iterative solver such as
>>> 'krylovschur', 'lanczos' or 'arnoldi' would be faster, but that is
>>> not the case. With the shift-and-invert spectral transform I could
>>> converge faster for small matrices, but these iterative
>>> methods still take longer than the LAPACK library on my system (when the
>>> size allows), even when requesting only a few eigenstates (fewer than 50).
>>> 
>> Is your matrix sparse?
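>> 
>> If it is, make sure shift-and-invert is configured so that the linear
>> solves with (A - sigma*I) exploit that sparsity; by default SLEPc handles
>> these solves with a direct LU factorization. A sketch (the target 0.0 is
>> just an illustrative value, pick a shift near the wanted eigenvalues):
>> 
>>     ST st;
>>     EPSGetST(eps,&st);
>>     STSetType(st,STSINVERT);
>>     EPSSetTarget(eps,0.0);
>>     EPSSetWhichEigenpairs(eps,EPS_TARGET_MAGNITUDE);
>> 
>> or equivalently -st_type sinvert -eps_target 0.0 on the command line.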
>> 
>> 
>> 
>>> Regarding the two previous comments, I would like to know whether there
>>> are rules for ensuring fast convergence of a diagonalisation with SLEPc.
>>> 
>>> 3/ About diagonalisation on many processors: after we assign values to
>>> the matrix, does SLEPc automatically distribute the calculation among the
>>> requested processes, or do we need to insert commands in the code to
>>> enforce it?
>>> 
>> Read the manual, and have a look at the examples that work in parallel
>> (most of them do).
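>> 
>> In short: if the matrix is created on PETSC_COMM_WORLD, PETSc distributes
>> it by rows and SLEPc solves in parallel automatically; your code only has
>> to fill the rows each process owns. A minimal assembly sketch (n is the
>> global size; the diagonal entry is just a placeholder):
>> 
>>     Mat      A;
>>     PetscInt i,Istart,Iend;
>>     MatCreate(PETSC_COMM_WORLD,&A);
>>     MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);
>>     MatSetFromOptions(A);
>>     MatSetUp(A);
>>     MatGetOwnershipRange(A,&Istart,&Iend);
>>     for (i=Istart;i<Iend;i++)
>>       MatSetValue(A,i,i,2.0,INSERT_VALUES);  /* fill row i's nonzeros */
>>     MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
>>     MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);
>> 
>> Then launch with mpiexec -n <nproc> ./program.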
>> 
>> 
>>> Sincerely,
>>> 
>>> -- 
>>> Steve
>>> 
>>> 
> 
> 
> -- 
> Steve A. Ndengué
> ---
> 
