[ https://issues.apache.org/jira/browse/HDFS-12984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324189#comment-16324189 ]

Hudson commented on HDFS-12984:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13485 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13485/])
HDFS-12984. BlockPoolSlice can leak in a mini dfs cluster. Contributed by Ajay 
Kumar. (arp: rev b278f7b29305cb67d22ef0bb08b067c422381f48)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java


> BlockPoolSlice can leak in a mini dfs cluster
> ---------------------------------------------
>
>                 Key: HDFS-12984
>                 URL: https://issues.apache.org/jira/browse/HDFS-12984
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.5
>            Reporter: Robert Joseph Evans
>            Assignee: Ajay Kumar
>             Fix For: 3.1.0
>
>         Attachments: HDFS-12984.001.patch, Screen Shot 2018-01-05 at 4.38.06 
> PM.png, Screen Shot 2018-01-05 at 5.26.54 PM.png, Screen Shot 2018-01-05 at 
> 5.31.52 PM.png
>
>
> When running some unit tests for Storm we found that we would occasionally 
> get out-of-memory errors in the HDFS integration tests.
> When I got a heap dump I found that the ShutdownHookManager was full of 
> BlockPoolSlice$1 instances, which hold a reference to the BlockPoolSlice, 
> which in turn holds a reference to the DataNode, and so on.
> It looks like when shutdown is called on the BlockPoolSlice there is no way 
> to remove the shutdown hook because no reference to it is saved (see the 
> sketch below).
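A minimal sketch of the leak pattern and one possible fix, assuming Hadoop's
org.apache.hadoop.util.ShutdownHookManager API (addShutdownHook/removeShutdownHook).
The class names LeakySlice and FixedSlice, the saveState() method, and the hook
priority are illustrative stand-ins, not the actual BlockPoolSlice code or the
committed patch.

    import org.apache.hadoop.util.ShutdownHookManager;

    class LeakySlice {
      LeakySlice() {
        // Anonymous hook registered without keeping a reference: even after this
        // slice is shut down, the ShutdownHookManager still holds the hook, which
        // holds this slice (and transitively the DataNode) -- the leak described above.
        ShutdownHookManager.get().addShutdownHook(new Runnable() {
          @Override
          public void run() {
            saveState();
          }
        }, 30);  // priority value is illustrative
      }

      void saveState() { /* persist replica info */ }

      void shutdown() {
        // No handle to the hook, so it cannot be removed here.
      }
    }

    class FixedSlice {
      private final Runnable shutdownHook;  // keep a reference so the hook can be removed

      FixedSlice() {
        shutdownHook = new Runnable() {
          @Override
          public void run() {
            saveState();
          }
        };
        ShutdownHookManager.get().addShutdownHook(shutdownHook, 30);
      }

      void saveState() { /* persist replica info */ }

      void shutdown() {
        saveState();
        // Remove the hook on a clean shutdown (e.g. a MiniDFSCluster tearing down
        // in a test), so the manager no longer pins this object.
        if (!ShutdownHookManager.get().isShutdownInProgress()) {
          ShutdownHookManager.get().removeShutdownHook(shutdownHook);
        }
      }
    }

With the saved reference removed on shutdown, repeated mini DFS cluster start/stop
cycles in a test suite no longer accumulate BlockPoolSlice (and DataNode) instances
in the ShutdownHookManager.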



