Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r165249764
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F <: NewInputFormat[K, V]](
val metadata = Map(
"files" -> newFiles.toList,
StreamInputInfo.METADATA_KEY_DESCRIPTION -> newFiles.mkString("\n"))
- val inputInfo = StreamInputInfo(id, 0, metadata)
+ val inputInfo = StreamInputInfo(id, rdds.map(_.count).sum, metadata)
--- End diff ---
I'm not in favor of such changes. Whether the process is sync or async,
because `reportInfo` is invoked here you still have to wait for the count to
finish.
In any case, I think reading the data twice is unacceptable in a streaming
scenario (even in a batch scenario). I guess the previous code set the count
to "0" intentionally.
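
To illustrate the double-read concern (a minimal sketch, not Spark code: `readFile` and the `reads` counter are hypothetical stand-ins for an HDFS scan, simulating what `rdds.map(_.count).sum` does before the downstream job reads the same files again):

```scala
// Counter tracking how many times each simulated "file" is scanned.
var reads = 0

// Hypothetical stand-in for reading a file's records from storage.
def readFile(path: String): Seq[Int] = {
  reads += 1
  Seq(1, 2, 3)
}

val files = Seq("a", "b")

// Eager record count up front, analogous to `rdds.map(_.count).sum`:
// this is the first full scan of every file.
val numRecords = files.map(f => readFile(f).size).sum

// The actual processing then scans every file a second time.
val processed = files.flatMap(f => readFile(f))

// At this point reads == 4: each of the 2 files was read twice.
```

With lazy sources and no caching, the count is a complete extra pass over the input, which is the cost the comment objects to for a per-batch streaming path.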
---