zhedoubushishi commented on a change in pull request #4915:
URL: https://github.com/apache/hudi/pull/4915#discussion_r840113380



##########
File path: 
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/TestSqlConf.scala
##########
@@ -85,6 +87,15 @@ class TestSqlConf extends TestHoodieSqlBase with BeforeAndAfter {
         s"$tablePath/" + HoodieTableMetaClient.METAFOLDER_NAME,
         HoodieTableConfig.PAYLOAD_CLASS_NAME.defaultValue).getTableType)
 
+      // Manually pass incremental configs to global configs to make sure Hudi query is able to load the
+      // global configs
+      DFSPropertiesConfiguration.addToGlobalProps(QUERY_TYPE.key, QUERY_TYPE_INCREMENTAL_OPT_VAL)

Review comment:
       > @zhedoubushishi : whats the scope of these configs. Is it the full spark-sql or spark-shell session ? also, users have to remember to remove them if for 2nd query they don't need these configs right?
   
   It would be at the cluster level, which means every query would have this option by default.
   
   > users have to remember to remove them if for 2nd query they don't need these configs right?
   
   Yes, you are right.
   
   Actually I used a bad example here; in practice we shouldn't set `QUERY_TYPE` inside the external config file.
   
   This change aims to fix the situation where a customer enables the metadata table in the external config file. Then each of their queries should run with the metadata table enabled by default.
   
   But it's kind of difficult to test that scenario, so I came up with this bad example.
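   To illustrate the cleanup point above, a test that pushes a config into the global props could restore them afterwards so later queries in the same session are unaffected. A minimal sketch, assuming the Hudi `DFSPropertiesConfiguration.addToGlobalProps` API used in this diff and a corresponding `clearGlobalProps` helper to reset the global state (the reset call is an assumption here, not something this PR adds):
   
   ```scala
   import org.apache.hudi.DataSourceReadOptions.{QUERY_TYPE, QUERY_TYPE_INCREMENTAL_OPT_VAL}
   import org.apache.hudi.common.config.DFSPropertiesConfiguration
   
   // Push the option into the cluster-level global props; every subsequent
   // Hudi query in this session will pick it up by default.
   DFSPropertiesConfiguration.addToGlobalProps(QUERY_TYPE.key, QUERY_TYPE_INCREMENTAL_OPT_VAL)
   try {
     // ... run the query under test, which should see the global config ...
   } finally {
     // Reset the global props so a second query is not silently affected.
     // (clearGlobalProps is assumed for illustration.)
     DFSPropertiesConfiguration.clearGlobalProps()
   }
   ```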




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
