gengliangwang commented on a change in pull request #24327: [SPARK-27418][SQL] Migrate Parquet to File Data Source V2
URL: https://github.com/apache/spark/pull/24327#discussion_r294307678
 
 

 ##########
 File path: sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala
 ##########
 @@ -218,9 +218,12 @@ class StreamSuite extends StreamTest {
       }
     }
 
-    val df = spark.readStream.format(classOf[FakeDefaultSource].getName).load()
-    assertDF(df)
-    assertDF(df)
+    // TODO: fix file source V2 as well.
+    withSQLConf(SQLConf.USE_V1_SOURCE_READER_LIST.key -> "parquet") {
 +      val df = spark.readStream.format(classOf[FakeDefaultSource].getName).load()
 
 Review comment:
   ```
   [info]   Decoded objects do not match expected objects:
   [info]   expected: WrappedArray(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
   [info]   actual:   WrappedArray(9, 0, 10, 1, 2, 8, 3, 6, 7, 5, 4)
   [info]   assertnotnull(upcast(getcolumnbyordinal(0, LongType), LongType, - root class: "scala.Long"))
   [info]   +- upcast(getcolumnbyordinal(0, LongType), LongType, - root class: "scala.Long")
   [info]      +- getcolumnbyordinal(0, LongType) (QueryTest.scala:70)
   ```
   We need to fix the read path of the streaming output.
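   Note that the failure above is an ordering mismatch, not a data mismatch: both sequences contain 0 through 10, but the V2 read path emits the rows in a different order. A minimal, self-contained sketch (plain Scala, no Spark; `UnorderedCheck` and `unorderedEquals` are hypothetical names, not part of the test suite) of the kind of order-insensitive comparison that distinguishes the two cases:

   ```scala
   object UnorderedCheck {
     // Compare two result sets while ignoring row order, since a
     // multi-partition read path may emit rows in any order.
     def unorderedEquals[A: Ordering](expected: Seq[A], actual: Seq[A]): Boolean =
       expected.sorted == actual.sorted

     def main(args: Array[String]): Unit = {
       val expected = Seq(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L)
       val actual   = Seq(9L, 0L, 10L, 1L, 2L, 8L, 3L, 6L, 7L, 5L, 4L)
       // Direct equality fails because only the order differs...
       assert(expected != actual)
       // ...but the contents are identical, so an unordered check passes.
       assert(unorderedEquals(expected, actual))
     }
   }
   ```

   Whether the right fix is to restore the V1 ordering in the read path or to relax the assertion is a separate question; the sketch only shows that the decoded contents themselves are intact.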
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
