[jira] [Assigned] (SPARK-17372) Running a file stream on a directory with partitioned subdirs throws NotSerializableException/StackOverflowError
[ https://issues.apache.org/jira/browse/SPARK-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-17372:

    Assignee: Tathagata Das  (was: Apache Spark)

> Running a file stream on a directory with partitioned subdirs throws NotSerializableException/StackOverflowError
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-17372
>                 URL: https://issues.apache.org/jira/browse/SPARK-17372
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL, Streaming
>    Affects Versions: 2.0.0
>            Reporter: Tathagata Das
>            Assignee: Tathagata Das
>
> Here is the result of my investigation. When we create a file stream on a directory that has partitioned subdirs (i.e. dir/x=y/), ListingFileCatalog.allFiles returns the files in the dir as a Seq[String] that is internally a Stream[String]. This is because of this [line|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala#L93], where LinkedHashSet.values.toSeq returns a Stream. When the [FileStreamSource|https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala#L79] then filters this Stream[String] to remove already-seen files, it creates a new Stream[String] whose filter function holds a $outer reference to the FileStreamSource (in Scala 2.10). Trying to serialize this Stream[String] throws NotSerializableException. This happens even if there is just one file in the dir.
> It is important to note that this behavior is different in Scala 2.11. There is no $outer reference to FileStreamSource, so it does not throw NotSerializableException. However, with a large sequence of files (tested with 5000 files), it throws StackOverflowError. This is because of how the Stream class is implemented: it is essentially a linked list, and serializing a long Stream requires *recursively* walking that list, which results in StackOverflowError.
> The right solution is to convert the seq to an array before writing it to the log.
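> To make the Scala 2.11 failure mode concrete, here is a minimal, self-contained sketch (not Spark code; the object name, file names, and element count are invented for illustration). It serializes a long, fully forced Stream[String] with plain Java serialization, which is expected to overflow the stack on a deep enough Stream, and then shows that converting to a strict Array first serializes without trouble. The Scala 2.10 $outer/NotSerializableException variant is not reproduced here.
{code:scala}
import java.io.{ByteArrayOutputStream, ObjectOutputStream}

object StreamSerializationRepro {

  // Serialize any object with plain Java serialization and return the bytes.
  def serialize(obj: AnyRef): Array[Byte] = {
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    try out.writeObject(obj) finally out.close()
    buffer.toByteArray
  }

  def main(args: Array[String]): Unit = {
    // A Stream is a cons list; forcing it materializes every cell but keeps
    // the deeply nested linked structure.
    val files: Stream[String] = Stream.tabulate(100000)(i => s"part-$i").force

    // Default serialization walks the cons cells recursively, so a long
    // enough Stream blows the stack (the issue above was seen with 5000 files).
    try {
      serialize(files)
      println("Stream serialized (stack happened to be large enough)")
    } catch {
      case _: StackOverflowError => println("Stream: StackOverflowError")
    }

    // The fix direction described above: materialize to a strict, flat
    // collection (an Array) before anything tries to serialize it.
    val asArray: Array[String] = files.toArray
    println(s"Array serialized to ${serialize(asArray).length} bytes")
  }
}
{code}
> The same reasoning motivates the proposed fix: materializing the Seq into an Array before it is written to the log removes both the closure capture and the deep recursive serialization.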
[jira] [Assigned] (SPARK-17372) Running a file stream on a directory with partitioned subdirs throws NotSerializableException/StackOverflowError
[ https://issues.apache.org/jira/browse/SPARK-17372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-17372:

    Assignee: Apache Spark  (was: Tathagata Das)