Comment #6 on issue 16093 by daniel.r.kegel: Memory leaks in net::HostResolver
http://code.google.com/p/chromium/issues/detail?id=16093

I'm seeing hundreds of instances of this today on my home machine;
in a single run through all ui_tests, it happens 99 or 101 times.

Perhaps it happens only when the network is flaky.
The stack is slightly different now, so it evades the earlier suppression:

28 bytes in 1 blocks are definitely lost in loss record 446 of 1,452
    at 0x7EA2BFF: malloc (vg_replace_malloc.c:193)
    by 0x10683B12: ???
    by 0x106814B5: ???
    by 0x10681B1A: ???
    by 0x10681D14: ???
    by 0x1067221D: ???
    by 0x106724CA: ???
    by 0xDAFE601: gethostbyname2_r@@GLIBC_2.1.2 (getXXbyYY_r.c:253)
    by 0xDAC6FA9: gaih_inet (getaddrinfo.c:531)
    by 0xDAC915E: getaddrinfo (getaddrinfo.c:2154)
    by 0x9248146: net::SystemHostResolverProc(std::string const&, net::AddressList*) (host_resolver_proc.cc:161)
    by 0x9244885: net::ResolveAddrInfo(net::HostResolverProc*, std::string const&, net::AddressList*) (host_resolver_impl.cc:48)
    by 0x9246519: net::HostResolverImpl::Job::DoLookup() (host_resolver_impl.cc:204)
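For reference, a suppression matching this new stack would look something like the sketch below. The suppression name is made up, and since the `???` frames can't be named, wildcarding from `malloc` down to `getaddrinfo` (`...` matches any number of frames) is probably the practical approach:

```
{
   hostresolver_getaddrinfo_leak_issue16093
   Memcheck:Leak
   fun:malloc
   ...
   fun:getaddrinfo
   fun:_ZN3net*SystemHostResolverProc*
}
```

The trade-off is that a wildcard this broad will also hide any genuine leak on the getaddrinfo path, which may or may not be acceptable.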

It's possible this is a leak inside glibc itself, who knows.
(An easy one with a similar stack was fixed long ago; see
https://bugzilla.redhat.com/show_bug.cgi?id=116526 )

But it's less likely that valgrind is screwing up. It distinguishes
between blocks that are still reachable and those that aren't.
If valgrind says something is "definitely lost", I think it means
there are no longer any pointers to it anywhere in memory.

