If you are using the default DACreate3d() with a stencil width of one and a 
star stencil, then each row of the matrix has 7*6*6 nonzeros, because it has to 
assume that each of the six dof is coupled with every one of the six dof in 
each of the six stencil directions plus the point itself (hence the factor of 
7). 
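
   For concreteness, I am assuming a setup along the lines of the fragment 
below; the non-periodic boundaries, the PETSC_DECIDE processor layout, and the 
variable names are my guesses, not taken from your code.

     DA             da;
     Mat            M;
     PetscErrorCode ierr;

     ierr = DACreate3d(PETSC_COMM_WORLD, DA_NONPERIODIC, DA_STENCIL_STAR,
                       96, 25, 22,                       /* global grid size */
                       PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                       6,                                /* dof per point    */
                       1,                                /* stencil width    */
                       PETSC_NULL, PETSC_NULL, PETSC_NULL, &da);CHKERRQ(ierr);
     /* preallocates assuming every dof couples to every dof at each stencil point */
     ierr = DAGetMatrix(da, MATAIJ, &M);CHKERRQ(ierr);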

   The number of nonzeros in the matrix is then roughly 96 * 25 * 22 * 6 * 
7*6*6; with the AIJ format each nonzero takes essentially 12 bytes (one double 
plus one 32-bit column index), so you end up with 958,003,200 bytes, roughly 1 
gigabyte. 
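
   If you want to double check that arithmetic, it is nothing more than the 
following throwaway program (plain C, no PETSc):

     #include <stdio.h>

     int main(void)
     {
       long rows  = 96L * 25L * 22L * 6L;   /* grid points times 6 dof            */
       long nzrow = 7L * 6L * 6L;           /* nonzeros assumed per row, as above */
       long bytes = rows * nzrow * 12L;     /* AIJ: ~12 bytes per nonzero         */
       printf("%ld bytes = %.2f gigabytes\n", bytes, bytes / 1.0e9);  /* ~0.96 GB */
       return 0;
     }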

   If you used DA_STENCIL_BOX then the 7 is replaced with 27, so the matrix 
takes roughly 3.7 gigabytes.   

   If you used --with-64-bit-indices=1 when configuring PETSc, then the 12 is 
replaced with 16 and the matrix would take about 1.3 gigabytes. 

   If you used the default PC ILU(0), then the preconditioner takes basically 
the same amount of space as the matrix, which explains why the memory usage 
would double during the solve.

    Unless my computations above are wrong, I cannot explain why it is taking 
2.5 gigabytes just for the original matrix. Did you ever insert entries into a 
row of the matrix beyond the stencil width that you provided? For example, if 
you set a stencil width of 1 but on some boundaries you actually put entries 
into the matrix at a stencil width of 2, this would blow up the memory usage a 
lot; there is a check for this sketched below. Or are you using a stencil width 
of 2?
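
   One way to check, assuming the matrix is the variable M from your message, 
is to print the preallocation statistics after assembly; if mallocs is greater 
than zero then entries were inserted outside the preallocated pattern:

     MatInfo        info;
     PetscErrorCode ierr;

     ierr = MatGetInfo(M, MAT_GLOBAL_SUM, &info);CHKERRQ(ierr);
     ierr = PetscPrintf(PETSC_COMM_WORLD,
                        "nz allocated %g  nz used %g  mallocs during MatSetValues() %g\n",
                        info.nz_allocated, info.nz_used, info.mallocs);CHKERRQ(ierr);

Running with -info will also report the number of mallocs used during the 
MatSetValues() calls.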

   How do you know it is using 2.5 gigabytes originally? 
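
   If the number comes from watching top or a similar tool, it may include more 
than just the matrix. A rough in-code number, placed for example right after 
MatAssemblyEnd(), would be something like:

     PetscLogDouble mem;
     PetscErrorCode ierr;

     ierr = PetscMemoryGetCurrentUsage(&mem);CHKERRQ(ierr);
     ierr = PetscPrintf(PETSC_COMM_WORLD, "current memory usage %g bytes\n", mem);CHKERRQ(ierr);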

   Regardless: if the little 6 by 6 blocks are actually dense, then you should 
switch to the BAIJ format, since the memory usage drops from 12 bytes per 
nonzero to essentially 8 bytes per nonzero. If the 6 by 6 blocks are sparse, 
which is very likely, then you should use the routine DASetBlockFills() to 
indicate the sparsity of these little blocks, as in the sketch below. This can 
save you an enormous amount of memory.
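
   A sketch of the two alternatives is below; the 0/1 fill patterns are 
invented purely for illustration, you would put a 1 wherever your equations 
actually couple a pair of dof and a 0 where they do not.

     /* Dense 6 by 6 blocks: just switch the matrix type (one column index
        is stored per block instead of per nonzero). */
     ierr = DAGetMatrix(da, MATBAIJ, &M);CHKERRQ(ierr);

     /* Sparse 6 by 6 blocks: describe the block sparsity before creating the
        AIJ matrix.  dfill is the coupling among the dof at the same grid point,
        ofill the coupling to the dof at neighboring grid points. */
     PetscInt dfill[6*6] = { 1,1,0,0,0,0,
                             1,1,0,0,0,0,
                             0,0,1,0,0,0,
                             0,0,0,1,0,0,
                             0,0,0,0,1,0,
                             0,0,0,0,0,1 };
     PetscInt ofill[6*6] = { 1,0,0,0,0,0,
                             0,1,0,0,0,0,
                             0,0,1,0,0,0,
                             0,0,0,1,0,0,
                             0,0,0,0,1,0,
                             0,0,0,0,0,1 };

     ierr = DASetBlockFills(da, dfill, ofill);CHKERRQ(ierr);
     ierr = DAGetMatrix(da, MATAIJ, &M);CHKERRQ(ierr);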

   Barry


On Nov 27, 2010, at 7:09 PM, Li, Zhisong (lizs) wrote:

>  Hi, Petsc Team,
> 
> I have a PETSc code computing on a simple structured grid. The dimension is 
> 96 x 25 x 22 with DOF = 6. When running in parallel or sequential on a 
> shared-memory machine, it costs about 2.5 GB of RAM when assembling the matrix 
> and more than 5 GB when KSP is solving with GMRES. In the code I use the basic 
> DAGetMatrix(da, MATAIJ, &M) to define the matrix, and it is supposed to be a 
> sparse matrix. Is this normal for a PETSc application? If not, how can I 
> improve this?
> 
> 
> Thank you.
> 
> 
> Zhisong Li
