wzx140 commented on code in PR #7003:
URL: https://github.com/apache/hudi/pull/7003#discussion_r1025403544


##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/HoodieSparkSqlTestBase.scala:
##########
@@ -170,4 +174,44 @@ class HoodieSparkSqlTestBase extends FunSuite with BeforeAndAfterAll {
     val fs = FSUtils.getFs(filePath, spark.sparkContext.hadoopConfiguration)
     fs.exists(path)
   }
+
+  protected def withSQLConf(pairs: (String, String)*)(f: => Unit): Unit = {
+    val conf = spark.sessionState.conf
+    val currentValues = pairs.unzip._1.map { k =>
+      if (conf.contains(k)) {
+        Some(conf.getConfString(k))
+      } else None
+    }
+    pairs.foreach { case (k, v) => conf.setConfString(k, v) }
+    try f finally {
+      pairs.unzip._1.zip(currentValues).foreach {
+        case (key, Some(value)) => conf.setConfString(key, value)
+        case (key, None) => conf.unsetConf(key)
+      }
+    }
+  }
+
+  protected def withRecordType(f: => Unit, recordConfig: Map[HoodieRecordType, Map[String, String]] = Map.empty) {
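
   A minimal sketch of how the `withSQLConf` helper above would be used (the config key and SQL statement are illustrative, not from the PR):

   ```scala
   // Hypothetical usage: the override is applied for the body only, and the
   // previous value is restored (or the key unset) afterwards, even if the
   // body throws, because the restore happens in the finally block.
   withSQLConf("hoodie.sql.insert.mode" -> "strict") {
     // spark.sql("insert into t values (1, 'a', 10)")  // runs under strict mode
   }
   // here the prior value of "hoodie.sql.insert.mode" is back in effect
   ```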

Review Comment:
   In most test cases, we do not need to pass different SQL configs per recordType, so I used a default parameter in the function.
   
   In "Test Insert Into None Partitioned Table", the Spark merger should use `HoodieSparkValidateDuplicateKeyRecordMerger` instead of `hoodie.sql.insert.mode=strict`.
   
   
   We should not replace the Spark record merger with `HoodieSparkValidateDuplicateKeyRecordMerger` when the user sets `hoodie.sql.insert.mode=strict`, because the SQL config has the higher priority.
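
   The priority rule could be sketched like this (illustrative only; `defaultMergerClass` and the fully qualified merger class name are assumptions, not the actual Hudi code path):

   ```scala
   // Only swap in the duplicate-key-validating merger when the user has NOT
   // explicitly set the sql insert mode, since the SQL config wins.
   val userSetInsertMode =
     spark.sessionState.conf.contains("hoodie.sql.insert.mode")

   val mergerClass =
     if (!userSetInsertMode) {
       // assumed class name for illustration
       "HoodieSparkValidateDuplicateKeyRecordMerger"
     } else {
       defaultMergerClass // keep whatever the SQL config implies
     }
   ```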



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
