yihua commented on code in PR #12798:
URL: https://github.com/apache/hudi/pull/12798#discussion_r1964565164
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/TestMergeIntoTable.scala:
##########
@@ -73,7 +73,7 @@ class TestMergeIntoTable extends HoodieSparkSqlTestBase with ScalaAssertionSuppo
| ) s0
| on s0.id = $tableName.id
| when matched and flag = '1' then update set
- | id = s0.id, name = s0.name, price = s0.price, ts = s0.ts
+ | id = s0.id, name = s0.name, price = s0.price, ts = s0.ts + 1
Review Comment:
Usually the precombine field should not be changed in SQL.
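For context, here is a minimal sketch in plain Scala (no Hudi dependency; the `Rec` model and `precombine` helper are illustrative, not Hudi's actual implementation) of why rewriting the precombine field in an update matters: under event-time ordering, the record with the larger precombine value (`ts`) wins on a key collision, so a SQL update that changes `ts` also changes which version survives.

```scala
// Illustrative model only (assumed semantics, not Hudi's real code):
// on a key collision the record with the larger ts wins; ties here
// favor the incoming record, which is an assumption.
case class Rec(id: Int, name: String, price: Double, ts: Long)

def precombine(existing: Rec, incoming: Rec): Rec =
  if (incoming.ts >= existing.ts) incoming else existing

val stored = Rec(1, "a1", 10.0, 1000L)
val stale  = Rec(1, "a1_new", 12.0, 999L) // older ts than the stored record
println(precombine(stored, stale).name)   // prints "a1": the stale update loses
```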
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/ddl/TestAlterTable.scala:
##########
@@ -49,7 +49,7 @@ class TestAlterTable extends HoodieSparkSqlTestBase {
| id int,
| name string,
| price double,
- | ts long
+ | ts int
Review Comment:
Similar here and in other test classes.
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/TestMergeIntoTable.scala:
##########
@@ -1373,45 +1373,6 @@ class TestMergeIntoTable extends HoodieSparkSqlTestBase with ScalaAssertionSuppo
spark.sql(s"insert into $tableName values(1, 'a1', 10, 1000)")
-    // Can't down-cast incoming dataset's primary-key w/o loss of precision (should fail)
-    val errorMsg = "Invalid MERGE INTO matching condition: s0.id: can't cast s0.id (of LongType) to IntegerType"
-
- checkExceptionContain(
- s"""
- |merge into $tableName h0
- |using (
- | select cast(1 as long) as id, 1001 as ts
- | ) s0
- | on cast(h0.id as long) = s0.id
- | when matched then update set h0.ts = s0.ts
- |""".stripMargin)(errorMsg)
-
-    // Can't down-cast incoming dataset's primary-key w/o loss of precision (should fail)
- checkExceptionContain(
- s"""
- |merge into $tableName h0
- |using (
- | select cast(1 as long) as id, 1002 as ts
- | ) s0
- | on h0.id = s0.id
- | when matched then update set h0.ts = s0.ts
- |""".stripMargin)(errorMsg)
-
-    // Can up-cast incoming dataset's primary-key w/o loss of precision (should succeed)
- spark.sql(
- s"""
- |merge into $tableName h0
- |using (
- | select cast(1 as short) as id, 1003 as ts
- | ) s0
- | on h0.id = s0.id
- | when matched then update set h0.ts = s0.ts
- |""".stripMargin)
-
- checkAnswer(s"select id, name, value, ts from $tableName")(
- Seq(1, "a1", 10, 1003)
- )
Review Comment:
Are these cases already covered elsewhere, so that they can safely be removed?
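On the casting cases specifically, the asymmetry the removed tests exercised is easy to show in plain Scala (illustrative values, not taken from the test): narrowing `Long` to `Int` can silently lose precision, while widening `Short` to `Int` cannot.

```scala
// Down-cast: a Long just past Int.MaxValue overflows when narrowed to Int.
val tooBig: Long = Int.MaxValue.toLong + 1 // 2147483648
println(tooBig.toInt)                      // prints -2147483648 (overflow)

// Up-cast: widening a Short to Int is always lossless.
val small: Short = 1
println(small.toInt)                       // prints 1
```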
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/TestMergeIntoTable2.scala:
##########
@@ -1099,7 +1099,7 @@ class TestMergeIntoTable2 extends HoodieSparkSqlTestBase {
}
    // Test 2: At least one partial insert assignment clause misses primary key.
- checkException(
+ checkExceptionContain(
Review Comment:
What's the reason for changing `checkException` to `checkExceptionContain`?
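If the naming follows the usual convention (an assumption; the real helpers live in `HoodieSparkSqlTestBase`), `checkException` asserts the exact error message while `checkExceptionContain` only asserts a substring, which tolerates engine-dependent prefixes and position info. A minimal sketch of that difference:

```scala
// Hypothetical stand-ins for the test-base helpers, modeling
// exact-match vs substring-match assertions on error messages.
def exactMatch(actual: String, expected: String): Boolean =
  actual == expected

def containsMatch(actual: String, expected: String): Boolean =
  actual.contains(expected)

val msg = "Missing specified insert column: id (line 3, pos 4)" // made-up message
println(exactMatch(msg, "Missing specified insert column: id"))    // prints false
println(containsMatch(msg, "Missing specified insert column: id")) // prints true
```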
##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/TestMergeModeCommitTimeOrdering.scala:
##########
@@ -242,7 +242,7 @@ class TestMergeModeCommitTimeOrdering extends HoodieSparkSqlTestBase {
| id int,
| name string,
| price double,
- | ts long
+ | ts int
Review Comment:
Similar comment here on avoiding type changes on the precombine field; fix the MERGE INTO statements if needed.