Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21948#discussion_r207765500
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/MemorySinkV2Suite.scala
---
@@ -44,16 +46,16 @@ class MemorySinkV2Suite extends StreamTest with
BeforeAndAfter {
val writer = new MemoryStreamWriter(sink, OutputMode.Append(), new
StructType().add("i", "int"))
writer.commit(0,
Array(
- MemoryWriterCommitMessage(0, Seq(InternalRow(1), InternalRow(2))),
- MemoryWriterCommitMessage(1, Seq(InternalRow(3), InternalRow(4))),
- MemoryWriterCommitMessage(2, Seq(InternalRow(6), InternalRow(7)))
+ MemoryWriterCommitMessage(0, Seq(Row(1), Row(2))),
--- End diff ---
because the memory sink needs `Row`s at the end. Instead of collecting
`InternalRow`s via copy and then converting them to `Row`s, I think it's more
efficient to collect `Row`s directly.
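The trade-off described above can be sketched with plain Scala stand-ins
(hypothetical `InternalRow`/`Row` case classes here, not the real Spark types):

```scala
// Hypothetical stand-ins to illustrate the two collection strategies.
// In real Spark these are catalyst InternalRow and sql.Row; the sketch only
// shows why converting once per record beats copy-then-convert in two passes.
case class InternalRow(values: Seq[Any])
case class Row(values: Seq[Any])

// Strategy 1: copy each record into an InternalRow, then convert the whole
// collection to Rows afterwards (two passes, one intermediate collection).
def viaInternalRows(data: Seq[Seq[Any]]): Seq[Row] = {
  val internal = data.map(v => InternalRow(v)) // copy into InternalRow first
  internal.map(ir => Row(ir.values))           // convert to Row at the end
}

// Strategy 2: convert each record to Row as it is collected (single pass,
// no intermediate InternalRow collection).
def viaRowsDirectly(data: Seq[Seq[Any]]): Seq[Row] =
  data.map(v => Row(v))
```

Both paths produce the same `Row`s; the second simply skips the intermediate
`InternalRow` copies, which is the efficiency argument made in the comment.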
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]