HyukjinKwon commented on a change in pull request #25398: [SPARK-28659][SQL] Use data source if convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#discussion_r333802719
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala
 ##########
 @@ -1396,6 +1396,16 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
       compressed = false,
       properties = rowStorage.properties ++ fileStorage.properties)
 
-    (ctx.LOCAL != null, storage, Some(DDLUtils.HIVE_PROVIDER))
+    val fileFormat = extractFileFormat(fileStorage.serde)
+    (ctx.LOCAL != null, storage, Some(fileFormat))
+  }
+
+  private def extractFileFormat(serde: Option[String]): String = {
 
 Review comment:
   @Udbhav30, what I meant is that this doesn't seem to be the right place for the replacement. You should add a configuration, and the replacement should happen during analysis/optimization, not in the parser.
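   As a rough sketch of the idea (not this PR's code; the flag name and the helper below are hypothetical, and a real change would register a proper `SQLConf` entry), the serde-to-provider decision could be gated by a configuration and made after parsing, for example in a later rule or planning step:
   
   ```scala
   import org.apache.spark.sql.internal.SQLConf
   
   object InsertDirProviderResolution {
     // Hypothetical flag name, for illustration only.
     private val convertFlag = "spark.sql.hive.convertInsertOverwriteDir"
   
     // Decide the provider for INSERT OVERWRITE DIRECTORY ... STORED AS <serde>:
     // keep the Hive SerDe path unless the serde is convertible and the flag is on.
     def resolveProvider(conf: SQLConf, serde: Option[String]): String = {
       val lower = serde.map(_.toLowerCase(java.util.Locale.ROOT)).getOrElse("")
       val convertible = lower.contains("parquet") || lower.contains("orc")
       if (convertible && conf.getConfString(convertFlag, "false").toBoolean) {
         if (lower.contains("parquet")) "parquet" else "orc"
       } else {
         "hive" // i.e. DDLUtils.HIVE_PROVIDER
       }
     }
   }
   ```
   
   That keeps the parser free of conversion policy and lets users turn the behavior off if the built-in writers ever differ from the Hive SerDe output.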
   
   If that's the goal, why don't you use `USING file_format` explicitly? Can you please describe clearly what this PR intends to fix?
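   For comparison, a sketch of the two existing syntaxes (the path and the `src` table are made up for illustration, and Hive support is assumed to be on the classpath). The `USING` form already names the data source explicitly, so it needs no serde-to-provider translation at all:
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   val spark = SparkSession.builder()
     .master("local[*]")
     .enableHiveSupport()
     .getOrCreate()
   
   // Hive-style syntax: goes through the Hive SerDe writer path today.
   spark.sql(
     """INSERT OVERWRITE DIRECTORY '/tmp/out_hive'
       |STORED AS PARQUET
       |SELECT * FROM src""".stripMargin)
   
   // Data source syntax: the provider is explicit, no conversion needed.
   spark.sql(
     """INSERT OVERWRITE DIRECTORY '/tmp/out_ds'
       |USING parquet
       |SELECT * FROM src""".stripMargin)
   ```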
