[ https://issues.apache.org/jira/browse/HDFS-464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12801106#action_12801106 ]
Christian Kunz commented on HDFS-464:
-------------------------------------

> 1. hdfs.c - In the hdfsConnectAsUserNewInstance() method, jAttrString is not released before subsequent returns. Is this fine?

You are entirely correct that it should be released, similar to jAttrString in hdfsConnectAsUser.

> 2. hdfs.c - In hdfsOpenFile(), jPath, jStrBufferSize, jStrReplication, and jStrBlockSize are not released before the subsequent return (the if (!file) check).

The original author must have thought that if you cannot malloc a few bytes you are hosed anyhow.

> 3. hdfs.c - In hdfsGetHosts(), does blockHosts leak in the subsequent return NULL?

It is the responsibility of the client to call hdfsFreeHosts, similar to calling hdfsFreeFileInfo after a call to hdfsGetPathInfo.

> On a side note, the code seems to be prone to introducing leaks. Seems to me we should have some kind of stack variable that tracks the objects to be deleted and cleans them up when it goes out of scope.

One giant step forward would be to replace the libhdfs library (which uses JNI) with a socket interface allowing C++ to access HDFS servers directly (it would increase performance and reduce the memory footprint) :)

> Memory leaks in libhdfs
> -----------------------
>
>                 Key: HDFS-464
>                 URL: https://issues.apache.org/jira/browse/HDFS-464
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>            Reporter: Christian Kunz
>            Assignee: Christian Kunz
>            Priority: Blocker
>         Attachments: HADOOP-6034.patch, patch.HADOOP-6034, patch.HADOOP-6034.0.18
>
>
> hdfsExists does not call destroyLocalReference for jPath at any point,
> hdfsDelete does not call it when it fails, and
> hdfsRename does not call it for jOldPath and jNewPath when it fails

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
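For illustration, a minimal sketch of the fix pattern the issue description calls out for hdfsExists: release the jPath local reference on every exit path, not only on success. This is not the attached patch; the JNI lookups are schematic, hdfsExistsSketch is a hypothetical name, and destroyLocalReference here is a stand-in for the hdfs.c helper of the same name.

{code}
#include <jni.h>

/* Stand-in for the hdfs.c helper: drop a JNI local reference if it exists. */
static void destroyLocalReference(JNIEnv *env, jobject obj)
{
    if (obj != NULL) {
        (*env)->DeleteLocalRef(env, obj);
    }
}

/* Hypothetical, simplified version of hdfsExists: jPath is released on the
 * lookup-failure path, the exception path, and the success path alike. */
int hdfsExistsSketch(JNIEnv *env, jobject jFS, jclass fsClass, jobject jPath)
{
    jmethodID mid = (*env)->GetMethodID(env, fsClass, "exists",
                                        "(Lorg/apache/hadoop/fs/Path;)Z");
    if (mid == NULL) {
        destroyLocalReference(env, jPath);   /* released even on failure */
        return -1;
    }

    jboolean exists = (*env)->CallBooleanMethod(env, jFS, mid, jPath);
    if ((*env)->ExceptionCheck(env)) {
        (*env)->ExceptionClear(env);
        destroyLocalReference(env, jPath);   /* released on the error path too */
        return -1;
    }

    destroyLocalReference(env, jPath);       /* and on success */
    return (exists == JNI_TRUE) ? 0 : -1;
}
{code}

Funneling all exits through a single cleanup point (or a small helper like the one above) is one plain-C way to approximate the scope-based cleanup idea mentioned in the comment.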