[
https://issues.apache.org/jira/browse/SPARK-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989566#comment-13989566
]
Thomas Graves commented on SPARK-1676:
--------------------------------------
https://github.com/apache/spark/pull/621
> HDFS FileSystems continually pile up in the FS cache
> ----------------------------------------------------
>
> Key: SPARK-1676
> URL: https://issues.apache.org/jira/browse/SPARK-1676
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.0.0, 0.9.1
> Reporter: Aaron Davidson
> Assignee: Thomas Graves
> Priority: Critical
>
> Due to HDFS-3545, FileSystem.get() always produces (and caches) a new
> FileSystem when provided with a new UserGroupInformation (UGI), even if the
> UGI represents the same user as another UGI. This causes a buildup of
> FileSystem objects at an alarming rate, often one per task for something like
> sc.textFile(). The bug is especially hard-hitting for NativeS3FileSystem,
> which also holds an open connection to S3, so the leaked instances eventually
> exhaust the system's file handles.
> The bug was introduced in https://github.com/apache/spark/pull/29, where doAs
> was made the default behavior.
> A fix is not forthcoming for the general case, as UGIs do not cache well, but
> this problem can cause Spark clusters to enter a failed state that requires
> the executors to be restarted.
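
To illustrate the mechanism described above: the Hadoop FileSystem cache key includes the UGI, and (per HDFS-3545) two UGI instances for the same user do not compare equal, so every fresh UGI is a cache miss and a new cached FileSystem. The following is a minimal, self-contained Java sketch that simulates this with identity-based key equality; the class and method names (FsCacheSketch, Ugi, Key, get) are illustrative stand-ins, not Hadoop's actual API.

```java
import java.util.HashMap;
import java.util.Map;

class FsCacheSketch {
    // Stand-in for UserGroupInformation: same user name, but no
    // equals()/hashCode() override, so equality is by object identity
    // (mirroring the Subject comparison behavior behind HDFS-3545).
    static final class Ugi {
        final String user;
        Ugi(String user) { this.user = user; }
    }

    // Stand-in for the FileSystem cache key: scheme plus UGI, where the
    // UGI is compared by identity, not by the user it represents.
    static final class Key {
        final String scheme;
        final Ugi ugi;
        Key(String scheme, Ugi ugi) { this.scheme = scheme; this.ugi = ugi; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return scheme.equals(k.scheme) && ugi == k.ugi; // identity!
        }
        @Override public int hashCode() {
            return scheme.hashCode() ^ System.identityHashCode(ugi);
        }
    }

    static final Map<Key, Object> cache = new HashMap<>();

    // Analogue of FileSystem.get(): returns the cached instance for the
    // key, creating and caching a new one on a miss.
    static Object get(String scheme, Ugi ugi) {
        return cache.computeIfAbsent(new Key(scheme, ugi), k -> new Object());
    }

    public static void main(String[] args) {
        // One fresh UGI per task for the same user, as with doAs in
        // something like sc.textFile(): every call misses the cache.
        for (int task = 0; task < 100; task++) {
            get("hdfs", new Ugi("alice"));
        }
        // The cache holds 100 entries for one user, not 1.
        System.out.println(cache.size());
    }
}
```

Reusing a single UGI instance across calls would hit the cache every time, which is why the buildup only appears once each task wraps its filesystem access in a new per-task UGI.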
--
This message was sent by Atlassian JIRA
(v6.2#6252)