On Mon, Feb 8, 2016 at 8:55 AM, Harshad Sahasrabudhe <hsaha...@purdue.edu>
wrote:

> Hi John and Cody,
>
> Thanks for your responses. I ran the code with valgrind --leak-check=full
> --track-origins=yes, and then again with --keep-cout --redirect-stdout.
> Valgrind doesn't report any leaks other than MPI-related ones. --keep-cout
> --redirect-stdout prints the reference-count summary for every processor,
> but they still show fewer creations than destructions.
>
> I was looking at the ReferenceCounter code and saw Threads::spin_mutex
> being used in the increment and decrement counts:
>
> 174  Threads::spin_mutex::scoped_lock lock(Threads::spin_mtx);
> 175  std::pair<unsigned int, unsigned int>& p = _counts[name];
>
> Could different MPI implementations be the cause of this issue? I'm using
> the Intel IMPI library with Intel compilers when compiling.
>

Line 174 is relevant when you are running with multiple threads; it has
nothing to do with MPI.

-- 
John
_______________________________________________
Libmesh-users mailing list
Libmesh-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/libmesh-users
