[
https://issues.apache.org/jira/browse/HADOOP-4635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12651251#action_12651251
]
Hadoop QA commented on HADOOP-4635:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12394690/HADOOP-4635.txt.0.19
against trunk revision 720930.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 3 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
-1 core tests. The patch failed core unit tests.
+1 contrib tests. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3657/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3657/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3657/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/3657/console
This message is automatically generated.
> Memory leak ?
> -------------
>
> Key: HADOOP-4635
> URL: https://issues.apache.org/jira/browse/HADOOP-4635
> Project: Hadoop Core
> Issue Type: Bug
> Components: contrib/fuse-dfs
> Affects Versions: 0.19.0, 0.20.0
> Reporter: Marc-Olivier Fleury
> Assignee: Pete Wyckoff
> Priority: Blocker
> Fix For: 0.18.3, 0.19.1
>
> Attachments: HADOOP-4635.txt.0.18, HADOOP-4635.txt.0.19,
> patch-hadoop4635.test_18, patch-hadoop4635.test_19, TEST-TestFuseDFS.txt-4635
>
>
> I am running a process that needs to crawl a tree structure containing ~10K
> images, copy the images to the local disk, process these images, and copy
> them back to HDFS.
> My problem is the following: after about 10 hours of processing, the
> processes crash with a std::bad_alloc exception (I use Hadoop Pipes to
> run existing software). When running fuse_dfs in debug mode, I get an
> OutOfMemoryError stating that there is no more room in the heap.
> While the process is running, top and ps show fuse_dfs using an
> increasing amount of memory until some limit is reached. At that point
> the memory usage oscillates, which I suppose is due to swapping into
> virtual memory.
> This leads me to conclude that there is a memory leak in fuse_dfs,
> since the only other programs running are Hadoop and the existing
> software, both of which have been thoroughly tested in the past.
> My knowledge of memory-leak tracking is rather limited, so I would need
> some instructions to get more insight into this issue.
> Thank you
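A common first step for tracking down a native-memory leak like the one described above is to run fuse_dfs in the foreground under valgrind's leak checker. The sketch below is only a suggestion; the binary path, namenode address, and mount point are hypothetical placeholders, not taken from this issue:

```shell
# Run fuse_dfs in the foreground (-d) under valgrind's leak checker.
# The binary path, namenode host/port, and mount point below are
# placeholders -- substitute the values from your own deployment.
valgrind --leak-check=full --show-reachable=yes --log-file=fuse_dfs.vg \
    ./fuse_dfs dfs://namenode:9000 /mnt/dfs -d

# After exercising the mount (e.g. the image copy/process workload
# described above) and unmounting it, inspect fuse_dfs.vg for
# "definitely lost" blocks and their allocation stack traces.
```

Note that valgrind slows the process down considerably, so a shorter run of the workload is usually enough to surface a leak that otherwise takes hours to exhaust the heap.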
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.