[
https://issues.apache.org/jira/browse/HDFS-596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Zhang Bingjun updated HDFS-596:
-------------------------------
Description:
This bug affects fuse-dfs severely. In my test, about 1 GB of memory was
exhausted and the fuse-dfs mount directory was disconnected after writing
14,000 files. This bug is related to the memory leak reported in HDFS-420:
https://issues.apache.org/jira/browse/HDFS-420.
The bug is easy to fix. In the function hdfsFreeFileInfo() in hdfs.c
(under c++/libhdfs/), change this code block:
// Free the mName
int i;
for (i = 0; i < numEntries; ++i) {
    if (hdfsFileInfo[i].mName) {
        free(hdfsFileInfo[i].mName);
    }
}
into:
// free mName, mOwner and mGroup
int i;
for (i = 0; i < numEntries; ++i) {
    if (hdfsFileInfo[i].mName) {
        free(hdfsFileInfo[i].mName);
    }
    if (hdfsFileInfo[i].mOwner) {
        free(hdfsFileInfo[i].mOwner);
    }
    if (hdfsFileInfo[i].mGroup) {
        free(hdfsFileInfo[i].mGroup);
    }
}
I am new to JIRA and haven't figured out how to generate a .patch file yet.
Could anyone help me with that so the changes can be committed into the
code base? Thanks!
> Memory leak in libhdfs: hdfsFreeFileInfo() in libhdfs does not free memory
> for mOwner and mGroup
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-596
> URL: https://issues.apache.org/jira/browse/HDFS-596
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: contrib/fuse-dfs
> Affects Versions: 0.20.1
> Environment: Linux hadoop-001 2.6.28-14-server #47-Ubuntu SMP Sat Jul
> 25 01:18:34 UTC 2009 i686 GNU/Linux. Namenode with 1GB memory.
> Reporter: Zhang Bingjun
> Priority: Critical
> Fix For: 0.20.1
>
> Original Estimate: 0.5h
> Remaining Estimate: 0.5h
>
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.