>> It is quite possible that openpty itself has a leak.
>> It is quite common for library functions not to clean up nicely after
>> their work is done...
>
> Heh, if that is the case, I might submit a bug report to the library
> developers. But openpty is commonly used this way, so I want to rule
> out a mistake on my part first.
>
I have never worked with openpty, but since you are not allocating any
memory yourself, and the openpty man page doesn't specify any clean-up
actions, it seems quite clear that the leak is openpty's fault.
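
A quick way to check that (just a sketch, untested) is to open and
close the pty twice; if Valgrind still reports only 40 bytes directly
lost, the allocation is a one-time cache inside the library rather
than a per-call leak:

#include <pty.h>      /* openpty(); link with -lutil */
#include <unistd.h>   /* close() */

int main(void)
{
        int master, slave;
        int i;

        /* Open and close a pty twice. If the directly-lost count
         * stays at 40 bytes, openpty allocates once and keeps the
         * memory, instead of leaking more on every call. */
        for (i = 0; i < 2; i++) {
                if (openpty(&master, &slave, NULL, NULL, NULL) < 0)
                        return -1;
                close(master);
                close(slave);
        }

        return 0;
}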

However, since the leak is only 40 bytes, I don't know whether the
developers will hurry to fix it... :)
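
In the meantime, if the report is just noise in your own testing, you
could hide it with a Memcheck suppression built from the stack trace
you posted (again only a sketch, untested; the file name openpty.supp
is just an example, and the fun: frames are copied from your output):

{
   openpty-nss-onetime
   Memcheck:Leak
   fun:malloc
   fun:nss_parse_service_list
   fun:__nss_database_lookup
}

Then run it as:

$ valgrind --leak-check=full --suppressions=openpty.supp ./test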

jody
>> jody
>>
>> On Wed, Apr 14, 2010 at 9:15 AM, Zhenhua Zhang <zhenhua.zh...@intel.com>
>> wrote:
>>> Hi,
>>>
>>> Valgrind reports a definitely-lost memory leak when I invoke
>>> openpty(). Could someone tell me whether this is a bug in Valgrind or
>>> a mistake in my use of openpty? Thanks.
>>>
>>> My program is:
>>> #include <unistd.h>
>>> #include <glib.h>
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> #include <utmp.h>
>>> #include <pty.h>
>>>
>>> int main(int argc, char **argv)
>>> {
>>>          int master, slave;
>>>
>>>          if (openpty(&master, &slave, NULL, NULL, NULL) < 0)
>>>                  return -1;
>>>
>>>          close(master);
>>>          close(slave);
>>>
>>>          return 0;
>>> }
>>>
>>> $ valgrind --leak-check=full ./test
>>> ==25001== Memcheck, a memory error detector
>>> ==25001== Copyright (C) 2002-2009, and GNU GPL'd, by Julian Seward et al.
>>> ==25001== Using Valgrind-3.5.0 and LibVEX; rerun with -h for copyright info
>>> ==25001== Command: ./test
>>> ==25001==
>>> ==25001==
>>> ==25001== HEAP SUMMARY:
>>> ==25001==     in use at exit: 160 bytes in 11 blocks
>>> ==25001==   total heap usage: 67 allocs, 56 frees, 5,594 bytes allocated
>>> ==25001==
>>> ==25001== 160 (40 direct, 120 indirect) bytes in 1 blocks are definitely lost in loss record 11 of 11
>>> ==25001==    at 0x4024C6C: malloc (vg_replace_malloc.c:195)
>>> ==25001==    by 0x41D8323: nss_parse_service_list (nsswitch.c:622)
>>> ==25001==    by 0x41D8A68: __nss_database_lookup (nsswitch.c:164)
>>> ==25001==    by 0x4672F2B: ???
>>> ==25001==    by 0x4673A24: ???
>>> ==25001==    by 0x418F954: getgrnam_r@@GLIBC_2.1.2 (getXXbyYY_r.c:253)
>>> ==25001==    by 0x41FB2FE: __unix_grantpt (grantpt.c:138)
>>> ==25001==    by 0x41FB588: grantpt (grantpt.c:84)
>>> ==25001==    by 0x40F7021: openpty (openpty.c:102)
>>> ==25001==    by 0x804A0E4: main (test-server2.c:40)
>>> ==25001==
>>> ==25001== LEAK SUMMARY:
>>> ==25001==    definitely lost: 40 bytes in 1 blocks
>>> ==25001==    indirectly lost: 120 bytes in 10 blocks
>>> ==25001==      possibly lost: 0 bytes in 0 blocks
>>> ==25001==    still reachable: 0 bytes in 0 blocks
>>> ==25001==         suppressed: 0 bytes in 0 blocks
>>> ==25001==
>>> ==25001== For counts of detected and suppressed errors, rerun with: -v
>>> ==25001== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 25 from 8)