On 09/08/2011, at 09:54, Shitij Bhargava wrote:

> Thanks Jose, Barry.
> 
> I tried what you said, but that gives me an error:
> 
> [0]PETSC ERROR: --------------------- Error Message ------------------------------------
> [0]PETSC ERROR: Argument out of range!
> [0]PETSC ERROR: Can only get local values, trying 9!
> 
>  This is probably because here I am trying to insert all rows of the matrix 
> through process 0, but process 0 doesn't own all the rows.
> 
> In any case, this seems very "unnatural", so I am now using MPIAIJ the right 
> way, as you said, assembling the MPIAIJ matrix in parallel instead of only on 
> one process. I have done that, and am running the code on the cluster right 
> now. It is going to take a very long time to finish, so I cannot yet confirm 
> some of my doubts, which I am asking below:
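
For reference, the owner-computes pattern being described looks roughly like
the sketch below: each rank asks PETSc for its ownership range and inserts only
the rows it owns, then everybody takes part in the assembly calls. This is an
untested sketch; the tridiagonal stencil and the global size N are hypothetical
stand-ins for the real problem, not anything from this thread.

/* Owner-computes assembly: each rank queries which rows it owns with
   MatGetOwnershipRange() and inserts only those rows.  A simple tridiagonal
   matrix of (hypothetical) global size N stands in for the real problem. */
Mat            A;
PetscInt       i, Istart, Iend, N = 1000;
PetscErrorCode ierr;

ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N);CHKERRQ(ierr);
ierr = MatSetType(A, MATMPIAIJ);CHKERRQ(ierr);
ierr = MatMPIAIJSetPreallocation(A, 3, NULL, 2, NULL);CHKERRQ(ierr);

ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
for (i = Istart; i < Iend; i++) {             /* only the locally owned rows */
  if (i > 0)   { ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
  if (i < N-1) { ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr); }
  ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
}
ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);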
> 
> 1. If I run the code with 1 process and it takes M memory (peak) while 
> solving for eigenvalues, then when I run it with N processes, each will take 
> nearly M/N memory (peak), probably a little more, right? And for doing this, 
> I don't have to use any special MPI code: the fact that I am using MPIAIJ, 
> building the EPS object from it, and then calling EPSSolve() is enough? I 
> mean, EPSSolve() internally distributes memory and computational effort 
> automatically when I use MPIAIJ and run the code with many processes, right?
> This confusion arose because, watching the run with top, each of the 8 
> processes showed nearly 250 MB initially, but each has grown to about 270 MB 
> after roughly 70 minutes. I understand that the krylovschur method is such 
> that memory requirements increase slowly, but the peak on any process will 
> still be less than if I ran only one process, right? (Even though their 
> memory requirements are growing, they will only grow to roughly M/N, right?)

The solver allocates some dynamic memory when the actual computation starts, so 
it is normal that you see a growth in the memory footprint. No further increase 
should be observed afterwards.
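
For reference, the usual driver looks roughly like the sketch below: every rank
runs the same code, but because the Mat and the EPS live on PETSC_COMM_WORLD
the calls are collective, so the ranks cooperate on a single eigenproblem and
each one holds only its share of the data. This is an untested sketch; the
Hermitian problem type and the options mentioned in the comments are
assumptions, not something taken from the run discussed here.

#include <slepceps.h>

int main(int argc, char **argv)
{
  Mat            A = NULL;   /* the assembled MPIAIJ matrix (elided here) */
  EPS            eps;        /* eigensolver context */
  PetscInt       nconv;
  PetscErrorCode ierr;

  ierr = SlepcInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  /* ... create and assemble A on PETSC_COMM_WORLD (see the earlier sketch) ... */

  /* Every rank makes the same calls; they are collective over PETSC_COMM_WORLD,
     so the ranks work together on one computation rather than individually. */
  ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
  ierr = EPSSetOperators(eps, A, NULL);CHKERRQ(ierr);
  ierr = EPSSetProblemType(eps, EPS_HEP);CHKERRQ(ierr);  /* assumption: Hermitian problem */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);           /* e.g. -eps_type krylovschur -eps_nev 10 */
  ierr = EPSSolve(eps);CHKERRQ(ierr);                    /* work and storage are split over the ranks */
  ierr = EPSGetConverged(eps, &nconv);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "Converged eigenpairs: %d\n", (int)nconv);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  ierr = SlepcFinalize();
  return ierr;
}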

Jose 

> 
> Actually, the fact that in this case each process creates its own EPS 
> context, initializes it itself, and then calls EPSSolve() itself without any 
> "interaction" with the other processes makes me wonder if they really are 
> working together, or just individually (I would have verified this myself, 
> but the program will take far too much time, and I know I would have to kill 
> it sooner or later)... or is the fact that they initialize their own EPS 
> context with THEIR part of the MPIAIJ matrix enough to make them "cooperate 
> and work together"? (I think this is what Barry meant in that last post, but 
> I am not too sure.)
> 
> I am not too comfortable with the MPI way of thinking right now, which is 
> probably why I have this confusion.
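
One cheap way to see the cooperation without waiting for the full run: each
rank stores only a contiguous block of rows of the MPIAIJ matrix, and together
those blocks cover the whole matrix. A tiny hypothetical check, assuming A is
the assembled matrix from above:

PetscMPIInt    rank;
PetscInt       Istart, Iend;
PetscErrorCode ierr;

/* Each rank reports the block of rows it owns; the blocks are disjoint and
   cover the global matrix, so the ranks are sharing one global problem. */
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_SELF, "[%d] owns rows %d to %d\n",
                   rank, (int)Istart, (int)(Iend-1));CHKERRQ(ierr);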
> 
> Anyway, I cannot thank you guys enough. I would have been scrounging through 
> the documentation again and again to no avail if you had not helped me the 
> way you did. The responses were always prompt, always to the point (even 
> though my questions sometimes were not, probably because I did not completely 
> understand the problems I was facing... but you always knew what I was 
> asking) and very clear. At this moment, I do not know much about PETSc/SLEPc 
> myself, but I will be sure to contribute back to this list when I do. I have 
> nothing but sincere gratitude for you guys.
> 
> 
> Thank you very much!
> 
> Shitij
