On 30/04/2014, at 21:06, Steve Ndengue wrote:

> I am presently doing a run with a 2000*2000 matrix and it took about 20
> minutes (obtained from the code's own timing routines, not PETSc; the PETSc
> information is in the log).
> I am expecting to be able to do calculations for matrices up to
> 100000*100000 on multiple processors, and larger if the resources allow it.
> 
> The matrix has zero entries, but they generally account for less than 50%
> of the total number of matrix elements, so I would guess it is not exactly
> sparse.
> The log file is attached to this message.
> 
> Sincerely,

Please reply to the list.

You should not use a debug build for timing; see the warning notice in the
output (reconfigure PETSc with --with-debugging=0).

The eigensolve takes 6.3 seconds, and most of that time is spent factoring
the matrix.

The matrix seems to be almost full. You need to do preallocation:
http://www.mcs.anl.gov/petsc/documentation/faq.html#efficient-assembly
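
Something like the following is the usual assembly pattern (just a sketch with
error checking omitted; A, n and nnz_per_row are placeholder names, and you
would estimate nnz_per_row from your own problem):

    Mat      A;
    PetscInt n = 2000;            /* global dimension (placeholder) */
    PetscInt nnz_per_row = 1000;  /* estimated nonzeros per row (placeholder) */
    PetscInt Istart, Iend, i;

    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    /* preallocate BEFORE inserting any values */
    MatSeqAIJSetPreallocation(A, nnz_per_row, NULL);
    MatMPIAIJSetPreallocation(A, nnz_per_row, NULL, nnz_per_row, NULL);
    /* each process owns a contiguous block of rows; fill only those */
    MatGetOwnershipRange(A, &Istart, &Iend);
    for (i = Istart; i < Iend; i++) {
      /* call MatSetValues(A,...) for row i here */
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

This also addresses your question 3: if each process sets only the rows it
owns, the matrix is distributed automatically and the eigensolver runs in
parallel on it when you launch the program with mpiexec; no extra code is
needed.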

SLEPc is appropriate for sparse matrices. If the matrices are not sparse then 
the methods are likely not appropriate.
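
If the matrix really is sparse, the typical setup for computing a few
eigenpairs looks like this (a sketch only, assuming a standard Hermitian
problem and an already assembled matrix A; nev and the target value are
placeholders):

    EPS      eps;
    ST       st;
    PetscInt nev = 50;   /* number of requested eigenpairs (placeholder) */

    EPSCreate(PETSC_COMM_WORLD, &eps);
    EPSSetOperators(eps, A, NULL);
    EPSSetProblemType(eps, EPS_HEP);          /* standard Hermitian problem (assumption) */
    EPSSetDimensions(eps, nev, PETSC_DEFAULT, PETSC_DEFAULT);
    EPSSetWhichEigenpairs(eps, EPS_TARGET_MAGNITUDE);
    EPSSetTarget(eps, 0.0);                   /* eigenvalues closest to 0 (placeholder) */
    EPSGetST(eps, &st);
    STSetType(st, STSINVERT);                 /* shift-and-invert */
    EPSSetFromOptions(eps);                   /* honor -eps_type, -st_ksp_type, ... */
    EPSSolve(eps);
    EPSDestroy(&eps);

Keep in mind that with shift-and-invert the dominant cost is factoring the
matrix, which is what your log shows; for a nearly full matrix that
factorization will not scale well.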

Jose


> 
> 
> On 04/30/2014 01:30 PM, Jose E. Roman wrote:
>> On 30/04/2014, at 17:25, Steve Ndengue wrote:
>> 
>> 
>>> Yes, the matrix is sparse. 
>>> 
>> How sparse?
>> How are you running the solver?
>> Where is the time spent (log_summary)?
>> 
>> Jose
>> 
>> 
>> 
>>> 
>>> On 04/30/2014 10:19 AM, Jose E. Roman wrote:
>>> 
>>>> On 30/04/2014, at 17:10, Steve Ndengue wrote:
>>>> 
>>>> 
>>>> 
>>>>> Dear all,
>>>>> 
>>>>> I have a few questions on achieving convergence with SLEPc.
>>>>> I am doing some comparisons of how SLEPc performs compared to a LAPACK
>>>>> installation on my system (an 8-processor Core i7 at 3.4 GHz running
>>>>> Ubuntu).
>>>>> 
>>>>> 1/ It appears that a calculation requesting the LAPACK eigensolver runs
>>>>> faster with my own libraries than with SLEPc selecting the 'lapack'
>>>>> method. I guess most of the time is spent assembling the matrix? However,
>>>>> while the time seems reasonable for a matrix smaller than 2000*2000, for
>>>>> one of 4000*4000 and above the computation is more than ten times slower
>>>>> with SLEPc and the 'lapack' method!
>>>>> 
>>>>> 
>>>> Once again, do not use SLEPc's 'lapack' method; it is just for debugging
>>>> purposes.
>>>> 
>>>> 
>>>> 
>>>>> 2/ I was however expecting that running an iterative solver such as
>>>>> 'krylovschur', 'lanczos' or 'arnoldi' would be faster, but that is not
>>>>> the case. Using the shift-and-invert spectral transform, I could converge
>>>>> faster for small matrices, but these iterative methods still take more
>>>>> time than the LAPACK library on my system (when the size allows), even
>>>>> when requesting only a few eigenstates (fewer than 50).
>>>>> 
>>>>> 
>>>> Is your matrix sparse?
>>>> 
>>>> 
>>>> 
>>>> 
>>>>> Regarding the two previous comments, I would like to know if there are
>>>>> some rules on how to ensure fast convergence of a diagonalisation with
>>>>> SLEPc.
>>>>> 
>>>>> 3/ About the diagonalisation on many processors: after we assign values
>>>>> to the matrix, does SLEPc automatically distribute the calculation among
>>>>> the requested processes, or do we need to insert commands in the code to
>>>>> enforce it?
>>>>> 
>>>>> 
>>>> Read the manual, and have a look at examples that work in parallel (most 
>>>> of them).
>>>> 
>>>> 
>>>> 
>>>>> Sincerely,
>>>>> 
>>>>> 
>>>>>  
>>>>> -- 
>>>>> Steve
>>>>> 
>>>>> 
>>>>> 
>>> 
>>> -- 
>>> Steve A. Ndengué
>>> ---
>>> 
>>> 
> 
> 
> -- 
> Steve A. Ndengué
> ---
> Postdoctoral Fellow
> Department of Chemistry
> Missouri University of Science and Technology
> ----
> 
> <log.txt>
