Hi everyone,
I am trying to add a term V to the stiffness matrix, and the assembly code
appears as follows:
for (unsigned int i=0; i<phi.size(); i++)
  for (unsigned int j=0; j<phi.size(); j++)
    {
      // Mass matrix entry
      Me(i,j) += JxW[qp]*phi[i][qp]*phi[j][qp];

      // Stiffness matrix entry, with the extra potential term V
      Ke(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp] +
                          phi[i][qp]*phi[j][qp]*V);
    }
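For context, here is roughly how that nested loop sits inside the
quadrature-point loop; this is only a minimal sketch following the libMesh
introduction examples, so names like fe, qrule, and the compute_potential
helper are placeholders rather than my exact code:

const std::vector<Real>& JxW = fe->get_JxW();
const std::vector<std::vector<Real> >& phi = fe->get_phi();
const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi();
const std::vector<Point>& q_point = fe->get_xyz();

for (unsigned int qp=0; qp<qrule.n_points(); qp++)
  {
    // V must be recomputed at every quadrature point, since it
    // depends on q_point[qp] (compute_potential is a placeholder
    // for the calculation shown further below)
    const Real V = compute_potential(q_point[qp]);

    for (unsigned int i=0; i<phi.size(); i++)
      for (unsigned int j=0; j<phi.size(); j++)
        {
          Me(i,j) += JxW[qp]*phi[i][qp]*phi[j][qp];
          Ke(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp] +
                              phi[i][qp]*phi[j][qp]*V);
        }
  }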
The error that I am getting is:
[0]PETSC ERROR:
------------------------------------------------------------------------
[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/petsc-as/documentation/troubleshooting.html#Signal
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory corruption errors
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: --------------------- Error Message ------------------------------------
[0]PETSC ERROR: Signal received!
[0]PETSC ERROR:
------------------------------------------------------------------------
[0]PETSC ERROR: Petsc Release Version 3.1.0, Patch 3, Fri Jun 4 15:34:52 CDT 2010
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:
------------------------------------------------------------------------
[0]PETSC ERROR: ./ex100-opt on a linux-gnu named ubuntu by omabbasi Wed Apr 13 17:19:17 2011
[0]PETSC ERROR: Libraries linked from /build/buildd/petsc-3.1.dfsg/linux-gnu-c-opt/lib
[0]PETSC ERROR: Configure run at Fri Sep 10 05:10:39 2010
[0]PETSC ERROR: Configure options --with-shared --with-debugging=0
--useThreads 0 --with-clanguage=C++ --with-c-support
--with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi
--with-mpi-shared=1 --with-blas-lib=-lblas --with-lapack-lib=-llapack
--with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse
--with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]"
--with-spooles=1 --with-spooles-include=/usr/include/spooles
--with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1
--with-hypre-dir=/usr --with-scotch=1
--with-scotch-include=/usr/include/scotch
--with-scotch-lib=/usr/lib/libscotch.so --with-hdf5=1 --with-hdf5-dir=/usr
[0]PETSC ERROR:
------------------------------------------------------------------------
[0]PETSC ERROR: User provided function() line 0 in unknown directory unknown file
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI COMMUNICATOR 3 DUP FROM 0
with errorcode 59.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
I expect that I am getting this error because V is not computed in a way that
PETSc recognizes. I would appreciate any suggestions.
I am not sure whether this is important, but I will include it anyway: V is
calculated by the following code:
Real sigma = 3.1575;
Real epsilon = 0.0450;
Real V1[131];   // assumes nr <= 131
Real V = 0.;    // must be initialized before the += accumulation below

const Real x = q_point[qp](0);
const Real y = q_point[qp](1);
const Real z = q_point[qp](2);

for (int i=0; i<nr; i++)
  {
    // Distance from the quadrature point to the i-th center
    const Real r = sqrt(pow(x-xx[i][0],2) +
                        pow(y-xx[i][1],2) +
                        pow(z-xx[i][2],2));

    V1[i] = -4*epsilon*pow(sigma,6)/r + 4*epsilon*pow(sigma,12)/r;
  }

for (int i=0; i<nr; ++i)
  V += V1[i];

if (V > 1.e8)
  V = 1.e8;
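Incidentally, the same sum can be accumulated directly, without the fixed-size
V1 buffer; that removes one possible source of out-of-range access if nr ever
exceeds 131. This is just a sketch under the same assumptions about nr, xx,
sigma, epsilon, and q_point as above:

Real V = 0.;
for (int i=0; i<nr; i++)
  {
    const Real r = sqrt(pow(x-xx[i][0],2) +
                        pow(y-xx[i][1],2) +
                        pow(z-xx[i][2],2));

    // Accumulate the i-th contribution directly into V
    V += -4*epsilon*pow(sigma,6)/r + 4*epsilon*pow(sigma,12)/r;
  }

if (V > 1.e8)   // same cap as before
  V = 1.e8;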
Thanks in advance,
Omar