MaxNevermind commented on code in PR #44636:
URL: https://github.com/apache/spark/pull/44636#discussion_r1449776458


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala:
##########
@@ -166,6 +186,23 @@ class FileStreamSource(
        // implies "sourceOptions.latestFirst = true" which we want to refresh the list per batch
         (newFiles.take(files.maxFiles()), null)
 
+      case files: ReadMaxBytes if !sourceOptions.latestFirst =>
+        // we can cache and reuse remaining fetched list of files in further batches
+        val (bFiles, usFiles) = takeFilesUntilMax(newFiles, files.maxBytes())
+        if (usFiles.map(_.size).sum < files.maxBytes() * DISCARD_UNSEEN_FILES_RATIO) {

Review Comment:
   I introduced `usFilesSize`, which is computed using `Math.addExact`. Let me know if that works. The reason I would like to leave this as-is, without modifying `takeFilesUntilMax`, is that I want to preserve the `val (bFiles, usFiles) = ...` computation model, matching the existing `ReadMaxFiles` case a few lines above, since it is clean and simple to understand and reason about. Let me know what you think.
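
   To illustrate the shape being discussed, here is a hedged sketch of how a `takeFilesUntilMax`-style split and an overflow-safe `usFilesSize` might look. `FileEntry`, the budget rule, and all logic below are assumptions for illustration, not Spark's actual implementation; only the names `takeFilesUntilMax`, `usFilesSize`, and the use of `Math.addExact` come from the thread.

```scala
// Hypothetical sketch of the (bFiles, usFiles) split at a byte budget.
// `FileEntry` is a stand-in type, not Spark's internal class.
final case class FileEntry(path: String, size: Long)

def takeFilesUntilMax(
    files: Seq[FileEntry],
    maxBytes: Long): (Seq[FileEntry], Seq[FileEntry]) = {
  var total = 0L
  val taken = files.takeWhile { f =>
    // Math.addExact throws ArithmeticException on Long overflow instead of
    // silently wrapping, which is the reason it is suggested for the sum.
    val next = Math.addExact(total, f.size)
    // Always admit the first file so a single oversized file still progresses.
    if (next <= maxBytes || total == 0L) { total = next; true } else false
  }
  (taken, files.drop(taken.length))
}

// Overflow-safe total size of the unselected ("unseen") files, replacing a
// plain usFiles.map(_.size).sum which could overflow silently.
def usFilesSize(usFiles: Seq[FileEntry]): Long =
  usFiles.foldLeft(0L)((acc, f) => Math.addExact(acc, f.size))
```

   The point of keeping the split as a separate helper is that the guard can stay a one-line comparison, e.g. `if (usFilesSize(usFiles) < files.maxBytes() * DISCARD_UNSEEN_FILES_RATIO)`, mirroring the `ReadMaxFiles` branch.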



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

