Hi Julian,

Additional responses to your questions are included inline below.

Included below are the outputs from two runs: the first is a non-MPI
application, and the second is an MPI application.  Both codes do essentially
the same thing, except that the latter adds some basic MPI calls to make it
an MPI application.

For the non-MPI application, valgrind prints its banner message and reports
the heap overrun.

For some reason I don't see the valgrind banner message for the MPI
application; the only valgrind messages are from valgrind's MPI wrappers, and
the heap overrun is never reported.
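
For reference, mem-bug.c is essentially the small overrun/leak example from
the Valgrind Quick Start guide.  The following is only my sketch of what it
roughly contains, reconstructed from the stack traces in the output below:

    #include <stdlib.h>

    void f(void)
    {
        int* x = malloc(10 * sizeof(int));   /* mem-bug.c:5 in the traces */
        x[10] = 0;        /* problem 1: heap block overrun (mem-bug.c:6) */
    }                     /* problem 2: memory leak -- x is never freed */

    int main(void)
    {
        f();              /* called from mem-bug.c:11 */
        return 0;
    }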

r4i0n0% icc -o mem-bug -debug mem-bug.c
r4i0n0% /contrib/valgrind/valgrind-3.8.1/bin/valgrind ./mem-bug
==9629== Memcheck, a memory error detector
==9629== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==9629== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==9629== Command: ./mem-bug
==9629==
==9629== Invalid write of size 4
==9629==    at 0x40051E: f (mem-bug.c:6)
==9629==    by 0x40052E: main (mem-bug.c:11)
==9629==  Address 0x5a70068 is 0 bytes after a block of size 40 alloc'd
==9629==    at 0x4C278FE: malloc (vg_replace_malloc.c:270)
==9629==    by 0x400508: f (mem-bug.c:5)
==9629==    by 0x40052E: main (mem-bug.c:11)
==9629==
==9629==
==9629== HEAP SUMMARY:
==9629==     in use at exit: 40 bytes in 1 blocks
==9629==   total heap usage: 1 allocs, 0 frees, 40 bytes allocated
==9629==
==9629== LEAK SUMMARY:
==9629==    definitely lost: 40 bytes in 1 blocks
==9629==    indirectly lost: 0 bytes in 0 blocks
==9629==      possibly lost: 0 bytes in 0 blocks
==9629==    still reachable: 0 bytes in 0 blocks
==9629==         suppressed: 0 bytes in 0 blocks
==9629== Rerun with --leak-check=full to see details of leaked memory
==9629==
==9629== For counts of detected and suppressed errors, rerun with: -v
==9629== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 4 from 4)
r4i0n0%
r4i0n0%
r4i0n0%
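
(As the leak summary suggests, the allocation stack for the leaked block can
be obtained by rerunning with --leak-check=full; something like

    /contrib/valgrind/valgrind-3.8.1/bin/valgrind --leak-check=full ./mem-bug

should show where the 40 lost bytes were allocated.  The overrun report above
is the part that matters for the comparison with the MPI run, though.)
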
r4i0n0% mpicc -DBUG -g -O0 -o hello_mpi_c hello_mpi_c.c /contrib/valgrind/valgrind-3.8.1/lib/valgrind/libmpiwrap-amd64-linux.so
r4i0n0%
r4i0n0% env MPIWRAP_DEBUG=verbose mpiexec_mpt -np 1 /contrib/valgrind/valgrind-3.8.1/bin/valgrind -v ./hello_mpi_c
valgrind MPI wrappers  9681: Active for pid 9681
valgrind MPI wrappers  9681: Try MPIWRAP_DEBUG=help for possible options
valgrind MPI wrappers  9681: enter PMPI_Init
valgrind MPI wrappers  9681: enter PMPI_Init_thread
valgrind MPI wrappers  9681:  exit PMPI_Init (err = 0)
valgrind MPI wrappers  9681: enter PMPI_Comm_rank
valgrind MPI wrappers  9681:  exit PMPI_Comm_rank (err = 0)
valgrind MPI wrappers  9681: enter PMPI_Comm_size
valgrind MPI wrappers  9681:  exit PMPI_Comm_size (err = 0)
valgrind MPI wrappers  9681: enter PMPI_Get_processor_name
Hello from rank 0 out of 1; procname = r4i0n0
Print something 0
valgrind MPI wrappers  9681: enter PMPI_Finalize
valgrind MPI wrappers  9681:  exit PMPI_Finalize (err = 0)
r4i0n0%
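
If it would help rule out the banner simply being lost or redirected by
mpiexec_mpt's stdout handling, I can also rerun with per-process log files
(my understanding is that valgrind's --log-file option accepts %p for the
pid), for example

    mpiexec_mpt -np 1 /contrib/valgrind/valgrind-3.8.1/bin/valgrind --log-file=vg.%p.log -v ./hello_mpi_c

and send those logs along as well.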

Thanks,

--Raghu



||-----Original Message-----
||From: Julian Seward [mailto:jsew...@acm.org]
||Sent: Wednesday, January 30, 2013 10:11 AM
||To: Raghu Reddy
||Cc: Valgrind-users@lists.sourceforge.net
||Subject: Re: [Valgrind-users] Is it possible to use valgrind with MPI
||applications (with SGI MPT)?
||
||
||> But the problem is I am unable to get valgrind to point out the
||> problem in the MPI code.  The output from that run is included below
||> (if it is all right, I will include the source code also):
||>
||> r31i2n2% m hello_mpi_c.c
||> #include <stdio.h>
||> #include <stdlib.h>   /* for malloc */
||> #include <mpi.h>
||>
||> int main(int argc, char **argv)
||> {
||>    int ierr, myid, npes;
||>    int len;
||>    char name[MPI_MAX_PROCESSOR_NAME];
||>
||>    ierr = MPI_Init(&argc, &argv);
||> #ifdef MACROTEST
||> #define MACROTEST 10
||> #endif
||>    ierr = MPI_Comm_rank(MPI_COMM_WORLD, &myid);
||>    ierr = MPI_Comm_size(MPI_COMM_WORLD, &npes);
||>    ierr = MPI_Get_processor_name( name, &len );
||>
||>      printf("Hello from rank %d out of %d; procname = %s\n", myid,
||> npes, name);
||>
||> #ifdef MACROTEST
||>      printf("Test Macro: %d\n", MACROTEST); #endif #ifdef BUG
||>      {
||>        int* x = (int*)malloc(10 * sizeof(int));
||>        x[10] = 0;        // problem 1: heap block overrun
||>        printf("Print something %d\n",x[10]);
||>      }                    // problem 2: memory leak -- x not freed
||> #endif
||>
||>    ierr = MPI_Finalize();
||>
||> }
||
||Two things:
||
||(1) rerun the MPI version but with the extra argument -v for valgrind, and
||post the results here.  This will make it possible to see if interception of
||malloc etc failed for some reason.

The output from running valgrind with the -v option is included above.

||
||(2) send (in private email) the executable corresponding to the above
||program to me, so I can have a look at the code for main and see if the
||compiler optimised out the test, since the allocation and assignment have no
||useful side effects.
||
As the output included above shows, the code does print the value of that
memory location, so I don't think the compiler could have optimized the store
out.  It was also compiled with -O0 (no optimization) to further guard against
that possibility.
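
If it would still be useful to rule out optimization completely, one variant I
could try is to make the pointer volatile so the out-of-bounds store cannot be
elided.  A sketch of just the BUG block (assuming <stdlib.h> is included):

    #ifdef BUG
         {
           volatile int* x = (volatile int*)malloc(10 * sizeof(int));
           x[10] = 0;         /* heap block overrun; a volatile store cannot be elided */
           printf("Print something %d\n", x[10]);   /* prints the value, as before */
         }                    /* x is never freed: memory leak */
    #endif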

As requested, I will e-mail the executable to you privately in a separate
message.

Thank you very much!


