[
https://issues.apache.org/jira/browse/HADOOP-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15097706#comment-15097706
]
sunhaitao commented on HADOOP-12707:
------------------------------------
Hi Chris Nauroth, many thanks for your reply; these two options can solve the
problem.
With the current design, though, even for the same user, calling this method
twice creates the FileSystem object twice. The CACHE then has no effect other
than holding a lot of unreferenced FileSystem objects.
Is there any plan to fix this?
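To illustrate the behavior described here, below is a minimal, self-contained sketch (not Hadoop's actual class) of a cache key that mixes `(scheme + authority).hashCode()` with the Subject's identity hash, the way `FileSystem.Cache.Key` does via UGI. Two lookups by the same logical user, each with a freshly created `Subject`, produce unequal keys, so the cache keeps accumulating entries:

```java
import javax.security.auth.Subject;

// Hypothetical model of the FileSystem.Cache.Key hashing described in
// this issue. Class and field names are illustrative, not Hadoop's.
public class CacheKeyDemo {
    static final class Key {
        final String scheme;
        final String authority;
        final Subject subject; // stands in for the UGI's Subject

        Key(String scheme, String authority, Subject subject) {
            this.scheme = scheme;
            this.authority = authority;
            this.subject = subject;
        }

        @Override
        public int hashCode() {
            // Identity hash differs per Subject instance, even for
            // Subjects representing the same user.
            return (scheme + authority).hashCode()
                    + System.identityHashCode(subject);
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return scheme.equals(k.scheme)
                    && authority.equals(k.authority)
                    && subject == k.subject; // identity, not content
        }
    }

    public static void main(String[] args) {
        // Two Subjects for the "same" user, e.g. from separate logins.
        Subject s1 = new Subject();
        Subject s2 = new Subject();

        Key k1 = new Key("hdfs", "nn:8020", s1);
        Key k2 = new Key("hdfs", "nn:8020", s2);

        // Same user and URI, yet the keys are distinct, so each lookup
        // caches a new FileSystem object that is never found again.
        System.out.println(k1.equals(k2)); // false
    }
}
```

Only a lookup that reuses the exact same `Subject` instance (for example, via `UserGroupInformation.getCurrentUser()` within one login) will hit the cached entry.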
> key of FileSystem inner class Cache contains UGI.hashCode which uses the
> default hashCode method, leading to a memory leak
> --------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-12707
> URL: https://issues.apache.org/jira/browse/HADOOP-12707
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 2.7.1
> Reporter: sunhaitao
> Assignee: sunhaitao
>
> The FileSystem.get(conf) method by default gets the fs object from the
> CACHE, but the key of the CACHE contains ugi.hashCode(), which uses the
> default hashCode of Subject instead of the hashCode method overridden by
> Subject:
> @Override
> public int hashCode() {
>     return (scheme + authority).hashCode() + ugi.hashCode() + (int) unique;
> }
> In this case, even for the same user, calling FileSystem.get(conf) twice
> creates two different keys. Over a long duration, this leads to a memory
> leak.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)