Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/22009#discussion_r208098178
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/sources/memoryV2.scala ---
@@ -132,35 +134,15 @@ class MemoryV2CustomMetrics(sink: MemorySinkV2) extends CustomMetrics {
   override def json(): String = Serialization.write(Map("numRows" -> sink.numRows))
 }
-class MemoryWriter(sink: MemorySinkV2, batchId: Long, outputMode: OutputMode, schema: StructType)
--- End diff ---
This is actually a batch writer, not a micro-batch writer, and it is only used in tests. For the writer API, micro-batch and continuous share the same interface, so we only need one streaming write implementation.
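To illustrate the design point, here is a minimal, self-contained sketch of why one streaming write implementation can serve both execution modes. The trait and class names below (`StreamingWriter`, `MemoryStreamingWriter`, the driver methods) are illustrative stand-ins, not Spark's actual DataSourceV2 API: the idea is simply that both micro-batch and continuous drivers push data through the same epoch-based commit interface.

```scala
// Hypothetical model (NOT Spark's real API): a single streaming writer
// interface that both micro-batch and continuous execution can drive.
trait StreamingWriter {
  def commit(epochId: Long, rows: Seq[String]): Unit
}

// One in-memory implementation, analogous in spirit to a test-only memory sink.
class MemoryStreamingWriter extends StreamingWriter {
  val committed = scala.collection.mutable.ArrayBuffer.empty[(Long, Seq[String])]
  override def commit(epochId: Long, rows: Seq[String]): Unit =
    committed += ((epochId, rows))
}

object StreamingWriterDemo {
  // Micro-batch driver: one commit per batch id.
  def runMicroBatch(w: StreamingWriter, batches: Seq[Seq[String]]): Unit =
    batches.zipWithIndex.foreach { case (rows, id) => w.commit(id.toLong, rows) }

  // Continuous driver: one commit per epoch, through the very same interface.
  def runContinuous(w: StreamingWriter, epochs: Seq[(Long, Seq[String])]): Unit =
    epochs.foreach { case (epoch, rows) => w.commit(epoch, rows) }

  def main(args: Array[String]): Unit = {
    val w = new MemoryStreamingWriter
    runMicroBatch(w, Seq(Seq("a", "b"), Seq("c")))
    runContinuous(w, Seq((2L, Seq("d"))))
    println(w.committed.map(_._1).mkString(","))  // prints "0,1,2"
  }
}
```

Because both drivers target the same `commit(epochId, rows)` contract, the sink needs only one implementation; a separate batch-only writer (like the removed `MemoryWriter`) is a different code path entirely.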
---