Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21948#discussion_r207723998
  
    --- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/MemorySinkV2Suite.scala ---
    @@ -44,16 +46,16 @@ class MemorySinkV2Suite extends StreamTest with BeforeAndAfter {
         val writer = new MemoryStreamWriter(sink, OutputMode.Append(), new StructType().add("i", "int"))
         writer.commit(0,
           Array(
    -        MemoryWriterCommitMessage(0, Seq(InternalRow(1), InternalRow(2))),
    -        MemoryWriterCommitMessage(1, Seq(InternalRow(3), InternalRow(4))),
    -        MemoryWriterCommitMessage(2, Seq(InternalRow(6), InternalRow(7)))
    +        MemoryWriterCommitMessage(0, Seq(Row(1), Row(2))),
    --- End diff ---
    
    Now the `DataWriter` needs to copy the input row before buffering it; that copy can be done by the `RowEncoder` when it converts the `InternalRow` to a `Row`. The write commit message then carries `Row`s to the driver side.
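    
    For illustration, a minimal sketch of such a writer, assuming the Spark 2.x data source v2 API (`org.apache.spark.sql.sources.v2.writer.DataWriter`) and the `ExpressionEncoder[Row]` returned by `RowEncoder(schema)`; the class and message names below (`BufferingRowDataWriter`, `RowsCommitMessage`) are hypothetical and may not match what the PR actually uses:
    
    ```scala
    import scala.collection.mutable
    
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.encoders.RowEncoder
    import org.apache.spark.sql.sources.v2.writer.{DataWriter, WriterCommitMessage}
    import org.apache.spark.sql.types.StructType
    
    // Hypothetical commit message carrying external Rows back to the driver.
    case class RowsCommitMessage(partitionId: Int, rows: Seq[Row]) extends WriterCommitMessage
    
    // Sketch of a DataWriter that converts each InternalRow to a Row before
    // buffering it. The encoder's deserialization produces a fresh Row, so the
    // (possibly reused) input InternalRow is effectively copied.
    class BufferingRowDataWriter(partitionId: Int, schema: StructType)
        extends DataWriter[InternalRow] {
    
      private val buffer = mutable.Buffer[Row]()
      // RowEncoder gives an ExpressionEncoder[Row]; bind it to the schema once.
      private val encoder = RowEncoder(schema).resolveAndBind()
    
      override def write(record: InternalRow): Unit = {
        // fromRow creates a new external Row from the InternalRow.
        buffer.append(encoder.fromRow(record))
      }
    
      override def commit(): WriterCommitMessage = {
        val msg = RowsCommitMessage(partitionId, buffer.toVector)
        buffer.clear()
        msg
      }
    
      override def abort(): Unit = buffer.clear()
    }
    ```
    
    The design point, if this reading is right, is that no explicit `copy()` on the incoming `InternalRow` is needed: the conversion to an external `Row` already materializes the data before it is buffered and shipped to the driver.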


---
