MaxNevermind commented on code in PR #44636:
URL: https://github.com/apache/spark/pull/44636#discussion_r1449772885
##########
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala:
##########
@@ -1154,25 +1154,33 @@ class FileStreamSourceSuite extends FileStreamSourceTest {
}
test("max files per trigger") {
+ testThresholdLogic("maxFilesPerTrigger")
+ }
+
+ test("SPARK-46641: max bytes per trigger") {
+ testThresholdLogic("maxBytesPerTrigger")
+ }
+
+ private def testThresholdLogic(option: String): Unit = {
withTempDir { case src =>
var lastFileModTime: Option[Long] = None
/** Create a text file with a single data item */
- def createFile(data: Int): File = {
- val file = stringToFile(new File(src, s"$data.txt"), data.toString)
+ def createFile(data: String): File = {
Review Comment:
The reason is that I was trying to reuse the original code as much as possible,
and in the original version of the code we write not only 1-character strings but
also 2-character strings like 10, 11, 12. That would break the logic for
maxBytesPerTrigger = 1. So the idea behind the replacement was to make every file
exactly 1 byte in size by writing chars ('a' to 'l') instead of numbers (1 to 12).
Let me know what you think about it.
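A minimal sketch of the byte-size argument above (standalone illustration, not the actual test code from the PR): single-character strings are always 1 byte in UTF-8, while the numeric data 1 to 12 includes 2-byte strings, which is what breaks the maxBytesPerTrigger = 1 case.

```scala
object FileSizeSketch {
  def main(args: Array[String]): Unit = {
    // Original test data: the numbers 1 to 12 as strings.
    // "10", "11", "12" are 2 bytes each, so a file per item is not uniformly 1 byte.
    val numericData = (1 to 12).map(_.toString)

    // Replacement data: the chars 'a' to 'l' as strings.
    // Every item is exactly 1 byte, so maxBytesPerTrigger = 1 admits one file per trigger.
    val charData = ('a' to 'l').map(_.toString)

    assert(charData.forall(_.getBytes("UTF-8").length == 1))
    assert(numericData.exists(_.getBytes("UTF-8").length > 1))
    println("char data is uniformly 1 byte; numeric data is not")
  }
}
```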
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]