Hello Eric,

Thank you for taking a quick look at the files. I have run ms_print myself
and I have the graphs as well. I have also used Massif-Visualizer, which I
built with some effort on Fedora. I can clearly see that the new code
performs considerably better: its heap usage stays roughly constant at about
5 MB, with two peaks in between at 274 MB and 15 MB.

The old code has a constant heap usage of 800 MB and a peak of 1 GB. Given
all this, what confuses me is that Memcheck reports 26 GB of allocated memory
for the old code and 25 GB for the new code (see the Memcheck summary
attached to my previous email).

Why does Memcheck report only about 1 GB of savings when the heap usage shown
by Massif is so much lower?
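
My current guess is that Memcheck's "total heap usage" counts every byte ever
allocated over the lifetime of the run, while Massif shows the live heap at
each snapshot. A minimal, purely hypothetical C sketch of how the two figures
can diverge (not taken from our code):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* 26,000 iterations x 1 MB: Memcheck would count ~26 GB of total
         * allocations, yet the live heap never exceeds 1 MB, so Massif
         * would show a tiny, flat profile.                               */
        for (int i = 0; i < 26000; i++) {
            char *buf = malloc(1024 * 1024);
            if (buf == NULL)
                return 1;
            memset(buf, 0, 1024 * 1024);  /* touch the block            */
            free(buf);                    /* live heap drops back to ~0 */
        }
        return 0;
    }

If that is what is going on, the 26 GB / 25 GB numbers would describe
allocation traffic rather than resident heap. Does that sound right?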

I have also run these tests with Google Performance Tools (gperftools). The
old code reports a usage/allocation of 23,000 MB and the new code 600 MB.
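
In case it is useful for reproducing the gperftools numbers, the heap profiler
can also be driven explicitly from the code, roughly like this (just a sketch;
the functions come from gperftools' heap-profiler.h, and the "/tmp/myprog"
prefix is made up):

    #include <gperftools/heap-profiler.h>

    /* Build with -ltcmalloc; dumps land in /tmp/myprog.<n>.heap and can
     * be inspected afterwards with pprof.                               */
    int main(void)
    {
        HeapProfilerStart("/tmp/myprog");  /* start recording allocations   */

        /* ... run the numerical computation here ... */

        HeapProfilerDump("after solve");   /* write an intermediate profile */
        HeapProfilerStop();                /* final dump and stop profiling */
        return 0;
    }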

As for mailing you the code: I will have to clean up certain parts before I
can send it to you, because it uses some licensed code that we cannot release
to the public.

I was wondering if you would be interested in the Google Perf profiles as
well.

Thank you,
Mahesh Narayanamurthi

On Fri, Sep 23, 2011 at 6:32 PM, Eric Schwarz <[email protected]> wrote:

>  Hi Mahesh
>
>
> Based on the files you provided I can tell you:
>
> 1.) You have a memory leak
>
> ==16673== 54,768 (36,512 direct, 18,256 indirect) bytes in 1,141 blocks are
> definitely lost in loss record 112 of 120
> ==16673==    at 0x4A05E46: malloc (vg_replace_malloc.c:195)
> ==16673==    by 0x418037: Elm (code.c:367)
> ==16673==    by 0x41201D: GenerateSolve (gen.c:1800)
> ==16673==    by 0x416A5B: Generate (gen.c:3180)
> ==16673==    by 0x40AA73: main (kpp.c:560)
>
> 2.) To visualize data from Valgrind's Massif and get more insight, you can
> use ms_print [1], which ships with Valgrind.
>
> Please find attached the ms_print output for the two Massif files you
> provided.
>
> Another convenient way to configure Valgrind and visualize its output is
> the Linux Tools plug-in [2] for Eclipse.
>
>
> Hope this helps.
>
>
> Best regards
> Eric
>
> [1] http://valgrind.org/docs/manual/ms-manual.html
> [2] http://www.eclipse.org/linuxtools/projectPages/valgrind/
>
>
>
> On 23.09.2011 19:28, Mahesh N wrote:
>
> Hello Eric,
>
>
> Thank you for volunteering to take a look. I have attached the files.
>
> Basically, I have two versions of a code that does heavy numerical
> computations. The old version was modified to consume less memory by using
> sparse data structures.
>
> The first attachment is the summary output from running Valgrind Memcheck
> on both the old and the new code.
>
> The second is the Massif output from heap-profiling the new code.
>
> The third is the Massif output from heap-profiling the old code.
>
>
> Thanks,
> Mahesh Narayanamurthi
>
> On Fri, Sep 23, 2011 at 1:06 PM, Eric Schwarz <[email protected]> wrote:
>
>>  Hi
>>
>>
>> Is the attachment missing?
>>
>>
>> Regards
>> Eric
>>
>> On 23.09.2011 18:15, Mahesh N wrote:
>>
>>  Hello,
>>
>> I have questions regarding the interpretation of the output from Memcheck
>> and Massif. I was wondering if someone could help me interpret the output
>> files; I am not sure how to correlate the two outputs.
>>
>> Thank you,
>> Mahesh Narayanamurthi
>>
>> --
>> I am a normally distributed Random Variable with mean N and variance M
>>
>>
>>
>
>
> --
> I am a normally distributed Random Variable with mean N and variance M
>
>
>


-- 
I am a normally distributed Random Variable with mean N and variance M
