[ https://issues.apache.org/jira/browse/SPARK-1676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Thomas Graves resolved SPARK-1676.
----------------------------------
    Resolution: Fixed
 Fix Version/s: 0.9.2
                1.0.0

> HDFS FileSystems continually pile up in the FS cache
> ----------------------------------------------------
>
>                 Key: SPARK-1676
>                 URL: https://issues.apache.org/jira/browse/SPARK-1676
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0, 0.9.1
>            Reporter: Aaron Davidson
>            Assignee: Thomas Graves
>            Priority: Critical
>             Fix For: 1.0.0, 0.9.2
>
> Due to HDFS-3545, FileSystem.get() always produces (and caches) a new
> FileSystem when provided with a new UserGroupInformation (UGI), even if the
> UGI represents the same user as another UGI. This causes a buildup of
> FileSystem objects at an alarming rate, often one per task for something like
> sc.textFile(). The bug is especially hard-hitting for NativeS3FileSystem,
> which also maintains an open connection to S3, clogging up the system file
> handles.
> The bug was introduced in https://github.com/apache/spark/pull/29, where doAs
> was made the default behavior.
> A fix is not forthcoming for the general case, as UGIs do not cache well, but
> this problem can lead to Spark clusters entering a failed state and
> requiring executors to be restarted.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
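The pile-up mechanism described in the issue can be sketched with a small simulation. This is not Spark or Hadoop code; it is a minimal Python model (class names `UGI` and `FileSystemCache` are illustrative stand-ins) of the behavior the report attributes to HDFS-3545: the FileSystem cache key includes the UGI, and UGI instances compare by identity rather than by user name, so each per-task UGI for the same user produces a fresh cache entry.

```python
class UGI:
    """Stand-in for Hadoop's UserGroupInformation.

    No __eq__/__hash__ overrides: like the real UGI (per HDFS-3545),
    two instances for the same user are still distinct cache keys.
    """
    def __init__(self, user):
        self.user = user


class FileSystemCache:
    """Stand-in for the FileSystem cache, keyed by (scheme, authority, ugi)."""
    def __init__(self):
        self._cache = {}

    def get(self, scheme, authority, ugi):
        # The UGI hashes by object identity, so a new UGI means a new key.
        key = (scheme, authority, ugi)
        if key not in self._cache:
            self._cache[key] = object()  # a fresh "FileSystem" instance
        return self._cache[key]

    def size(self):
        return len(self._cache)


cache = FileSystemCache()

# One new UGI per task, all for the same logical user -- the cache
# still grows by one entry per task, mirroring the reported buildup.
for _ in range(1000):
    cache.get("hdfs", "namenode:8020", UGI("aaron"))

print(cache.size())  # 1000 entries for a single logical user
```

Reusing one UGI object would hit the same cache entry every time, which is why the pile-up only appears once a fresh UGI is created per doAs call.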