Zouxxyy commented on code in PR #7793:
URL: https://github.com/apache/hudi/pull/7793#discussion_r1092733332
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/TestInsertTable.scala:
##########
@@ -433,122 +433,22 @@ class TestInsertTable extends HoodieSparkSqlTestBase {
""".stripMargin)
checkAnswer(s"select id, name, price, ts, dt from $tableName " +
s"where dt >='2021-01-04' and dt <= '2021-01-06' order by id,dt")(
- Seq(2, "a2", 12.0, 1000, "2021-01-05"),
- Seq(2, "a2", 10.0, 1000, "2021-01-06"),
Seq(3, "a1", 10.0, 1000, "2021-01-04")
)
- // test insert overwrite non-partitioned table
+ // Test insert overwrite non-partitioned table
     spark.sql(s"insert overwrite table $tblNonPartition select 2, 'a2', 10, 1000")
checkAnswer(s"select id, name, price, ts from $tblNonPartition")(
Seq(2, "a2", 10.0, 1000)
)
- })
- }
- test("Test Insert Overwrite Table for V2 Table") {
- withSQLConf("hoodie.schema.on.read.enable" -> "true") {
Review Comment:
Yeah, I noticed that you added this config to force the use of the V2 table,
but I think in the future, Hudi Spark 3 may use V2 by default instead of being
controlled by this config.
Besides, a V1 table can also distinguish between insert overwrite partition and
insert overwrite table by checking whether partitionSpec is empty, so I think
the tests should be uniform.
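
The distinction described above can be sketched roughly as follows. This is a hypothetical illustration, not Hudi's actual resolution code: the `OverwriteMode` ADT and `resolve` helper are made-up names, standing in for wherever the engine inspects the partition spec of an `INSERT OVERWRITE` statement.

```scala
// Hypothetical sketch: deciding between whole-table overwrite and
// partition overwrite by checking whether a partition spec was supplied.
object OverwriteModeSketch {
  sealed trait OverwriteMode
  case object OverwriteTable extends OverwriteMode
  case object OverwritePartition extends OverwriteMode

  // An empty partition spec means the statement targets the whole table;
  // a non-empty spec means only the matching partitions are replaced.
  def resolve(partitionSpec: Map[String, Option[String]]): OverwriteMode =
    if (partitionSpec.isEmpty) OverwriteTable else OverwritePartition
}
```

Under this reading, both V1 and V2 paths could branch on the same emptiness check, which is why a single uniform test could cover both.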
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]