Hi Justin,
did you run your code through Valgrind? I suspect that this is caused by
some memory corruption, particularly as you claim that smaller matrix
sizes are fine. Also, do you check all the error codes returned by PETSc
routines?
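The usual pattern is to capture the return value of every PETSc call and
pass it through CHKERRQ(), so that the first failing call reports a full
stack trace instead of being silently ignored. Roughly (using your
Mglobal; the index and value arrays are just placeholders):

  PetscErrorCode ierr;
  /* check every PETSc call; CHKERRQ() reports the error with a traceback */
  ierr = MatSetValues(Mglobal, 3, idxm, 3, idxn, vals, INSERT_VALUES); CHKERRQ(ierr);
  ierr = MatAssemblyBegin(Mglobal, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Mglobal, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);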
And are you sure about the nonzero pattern of your matrix? From your
description it sounds like you are solving 8192 decoupled problems of
size 3x3, whereas in e.g. typical finite element applications you get 3x3
local mass matrices per cell, but the total number of degrees of
freedom is given by the number of vertices.
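Just to illustrate the distinction (the loop structure and names like
ComputeLocalMass below are made up, not taken from your code): for a
genuinely decoupled/discontinuous discretization, element e owns rows
3*e..3*e+2, the matrix is block-diagonal, and 3 nonzeros per row is the
right preallocation:

  PetscErrorCode ierr;
  PetscInt       e, i, idx[3];
  PetscScalar    Mloc[9];            /* local 3x3 mass matrix, row-major */

  for (e = 0; e < nElems; ++e) {
    ComputeLocalMass(e, Mloc);       /* hypothetical helper filling Mloc */
    for (i = 0; i < 3; ++i) idx[i] = 3*e + i;
    ierr = MatSetValues(Mglobal, 3, idx, 3, idx, Mloc, INSERT_VALUES); CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(Mglobal, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Mglobal, MAT_FINAL_ASSEMBLY); CHKERRQ(ierr);

If, on the other hand, your degrees of freedom sit on shared vertices,
then neighboring cells write into the same rows, the contributions have to
be accumulated with ADD_VALUES, and 3 preallocated nonzeros per row will
not be enough.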
If the above doesn't help: Any chance of sending us the relevant source
code?
Best regards,
Karli
On 11/27/2013 09:12 AM, Justin Dong (Me) wrote:
I am assembling a global mass matrix on a mesh consisting of 8192
elements and 3 basis functions per element (so the global dimension is
24,576). For some reason, when assembling this matrix I get tons of
floating point exceptions everywhere:
[0]PETSC ERROR: --------------------- Error Message
------------------------------------
[0]PETSC ERROR: Floating point exception!
[0]PETSC ERROR: Inserting nan at matrix entry (0,0)!
I get this error for every 3rd diagonal entry of the matrix, but I suspect
the problem is bigger than that. When I compute the local 3x3 element mass
matrices, printing out the values gives all zeros (which is what the
entries are initialized to in my code).
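For what it's worth, the kind of sanity check I can add right before the
MatSetValues call would look roughly like this (simplified; Mloc and e are
placeholders for my actual variables, and I'm assuming PetscScalar is a
plain double here):

  #include <math.h>    /* isnan / isinf */

  /* flag any non-finite entry of the local 3x3 mass matrix before inserting it */
  int k;
  for (k = 0; k < 9; ++k) {
    if (isnan(Mloc[k]) || isinf(Mloc[k]))
      printf("element %d: local entry %d is %g\n", e, k, Mloc[k]);
  }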
I’m at a loss as to how to debug this problem. My first thought was that
there is an error in the mesh, but I’m now certain that this is not the
case, since the mesh is generated by the exact same routine that generates
all of my coarser meshes. For example, the mesh one refinement level below
this one has 2048 elements and works completely fine. This is how I am
creating the global matrix:
  MatCreateSeqAIJ(PETSC_COMM_SELF, NLoc*nElems, NLoc*nElems, NLoc, PETSC_NULL, &Mglobal);
where I allocate NLoc = 3 non-zero entries per row in this case.
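In case the form of the preallocation matters: the per-row variant of the
same call would be something along these lines (just a sketch; nnz is a
placeholder array holding 3 for every row, and the scalar nz argument is
ignored when nnz is supplied):

  PetscInt r, *nnz;
  PetscMalloc(NLoc*nElems*sizeof(PetscInt), &nnz);   /* per-row nonzero counts */
  for (r = 0; r < NLoc*nElems; ++r) nnz[r] = NLoc;   /* 3 nonzeros in every row */
  MatCreateSeqAIJ(PETSC_COMM_SELF, NLoc*nElems, NLoc*nElems, 0, nnz, &Mglobal);
  PetscFree(nnz);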