Github user rajat-agarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/537#discussion_r14324543
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -97,6 +98,23 @@ class FileInputDStream[K: ClassTag, V: ClassTag, F <: NewInputFormat[K,V] : ClassTag](
   }

   /**
+   * Find files recursively in a directory
+   */
+  private def recursiveFileList(
+      fileStatuses: List[FileStatus],
+      paths: List[Path] = List[Path]()
+    ): List[Path] = fileStatuses match {
+
+    case f :: tail if (fs.getContentSummary(f.getPath).getDirectoryCount > 1) =>
+      recursiveFileList(fs.listStatus(f.getPath).toList ::: tail, paths)
+    case f :: tail if f.isDir => recursiveFileList(tail, f.getPath :: paths)
+    case f :: tail => recursiveFileList(tail, paths)
+    case _ => paths
+
+  }
+
+
--- End diff ---
If the given directory contains both files and subdirectories, the files are
ignored and only the lowest level of subdirectories is considered.
Example:
Suppose the input directory has the following structure:
/input/directory/\<files>
/input/directory/\<directories>
Then any data in the files directly under "/input/directory/" will be
ignored: the first case expands such a directory into its children without
ever adding it to `paths`, and the files themselves fall through to the
catch-all case, which discards them.
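To make the issue concrete, here is a minimal sketch of the recursion pattern that avoids dropping files at intermediate levels. It uses a toy `Node` tree in place of Hadoop's `FileStatus`/`Path` (those names and the `fs` handle are Spark/Hadoop specifics not reproduced here), and it accumulates file paths at every level before expanding subdirectories:

```scala
// Toy stand-ins for Hadoop's FileStatus/Path (assumption: simplified model,
// not the actual Spark/Hadoop API).
sealed trait Node { def path: String }
case class FileNode(path: String) extends Node
case class DirNode(path: String, children: List[Node]) extends Node

// Tail-recursive listing that keeps files at every level: a file is
// accumulated immediately, while a directory is expanded into its children
// in place. No level of the tree is skipped.
@annotation.tailrec
def listFiles(pending: List[Node], acc: List[String] = Nil): List[String] =
  pending match {
    case FileNode(p) :: tail      => listFiles(tail, p :: acc)
    case DirNode(_, kids) :: tail => listFiles(kids ::: tail, acc)
    case Nil                      => acc.reverse
  }

// A directory containing both a file and a subdirectory, the case the
// review comment flags: the top-level file must not be dropped.
val tree = DirNode("/input/directory", List(
  FileNode("/input/directory/a.txt"),
  DirNode("/input/directory/sub", List(FileNode("/input/directory/sub/b.txt")))
))

println(listFiles(List(tree)))
// both a.txt and sub/b.txt are listed
```

The key difference from the patch under review is that files are collected as they are encountered, rather than collecting only leaf directories and scanning them later.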