Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/9143#discussion_r44240396
--- Diff: streaming/src/test/scala/org/apache/spark/streaming/util/WriteAheadLogSuite.scala ---
@@ -58,49 +71,127 @@ class WriteAheadLogSuite extends SparkFunSuite with BeforeAndAfter {
Utils.deleteRecursively(tempDir)
}
- test("WriteAheadLogUtils - log selection and creation") {
- val logDir = Utils.createTempDir().getAbsolutePath()
+ test(testPrefix + "read all logs") {
+ // Write data manually for testing reading through WriteAheadLog
+ val writtenData = (1 to 10).map { i =>
+ val data = generateRandomData()
+ val file = testDir + s"/log-$i-$i"
+ writeDataManually(data, file)
--- End diff ---
It's important that the BatchedWAL can also recover when the existing data is not batched, right? Isn't it possible that some users start their stream on Spark 1.5 and then, after upgrading, enable the BatchedWAL? All of the existing data in their checkpoint directory would be unbatched, but the BatchedWAL should still be able to recover from it.
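
To make the scenario concrete, here is a rough sketch of the kind of compatibility test this suggests (not code from this PR). It assumes constructor shapes of roughly `FileBasedWriteAheadLog(conf, dir, hadoopConf, rollingIntervalSecs, maxFailures)` and `BatchedWriteAheadLog(wrappedLog, conf)` (the real argument lists may differ), that the batched log's `readAll()` returns non-batched records unchanged (exactly the behaviour being asked for here), and that the snippet lives inside the streaming test package since both classes are `private[streaming]`. The `serialize`/`deserialize` helpers are hypothetical, added only for illustration:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
import java.nio.ByteBuffer

import scala.collection.JavaConverters._

import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf
import org.apache.spark.streaming.util.{BatchedWriteAheadLog, FileBasedWriteAheadLog}
import org.apache.spark.util.Utils

// Hypothetical helpers: Java-serialize a String so the payload resembles the
// object records a real receiver tracker would have written, not raw bytes.
def serialize(s: String): ByteBuffer = {
  val bos = new ByteArrayOutputStream()
  val oos = new ObjectOutputStream(bos)
  oos.writeObject(s)
  oos.close()
  ByteBuffer.wrap(bos.toByteArray)
}

def deserialize(buf: ByteBuffer): String = {
  val bytes = new Array[Byte](buf.remaining())
  buf.get(bytes)
  new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject().asInstanceOf[String]
}

val conf = new SparkConf()
val hadoopConf = new Configuration()
val logDir = Utils.createTempDir().getAbsolutePath()

// 1. Simulate a pre-upgrade checkpoint directory: records written one at a
//    time by the plain file-based WAL, never batched.
//    (Constructor arguments here are approximate.)
val oldWal = new FileBasedWriteAheadLog(conf, logDir, hadoopConf, 1, 1)
val expected = (1 to 5).map(i => s"record-$i")
expected.zipWithIndex.foreach { case (r, i) => oldWal.write(serialize(r), i.toLong) }
oldWal.close()

// 2. Re-open the same directory behind the batched WAL, as a user would after
//    enabling batching, and check that every old record is still recovered.
val batchedWal = new BatchedWriteAheadLog(
  new FileBasedWriteAheadLog(conf, logDir, hadoopConf, 1, 1), conf)
val recovered = batchedWal.readAll().asScala.map(deserialize).toSeq
assert(recovered == expected)
batchedWal.close()
```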