Udbhav30 commented on a change in pull request #25398: [SPARK-28659][SQL] Use data source if convertible in insert overwrite directory
URL: https://github.com/apache/spark/pull/25398#discussion_r333874882
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala
 ##########
 @@ -1396,6 +1396,16 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
       compressed = false,
       properties = rowStorage.properties ++ fileStorage.properties)
 
-    (ctx.LOCAL != null, storage, Some(DDLUtils.HIVE_PROVIDER))
+    val fileFormat = extractFileFormat(fileStorage.serde)
+    (ctx.LOCAL != null, storage, Some(fileFormat))
+  }
+
+  private def extractFileFormat(serde: Option[String]): String = {
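For context on the truncated `extractFileFormat` helper above, here is a minimal, self-contained sketch of what such a serde-to-provider mapping could look like (not the PR's actual implementation; the Hive serde class names are the standard ones, and the `"hive"` literal stands in for `DDLUtils.HIVE_PROVIDER`):

```scala
// A standalone sketch, assuming the standard Hive serde class names; the
// "hive" literal stands in for DDLUtils.HIVE_PROVIDER in the real code.
object ExtractFileFormatSketch {
  def extractFileFormat(serde: Option[String]): String = serde match {
    case Some(s) if s.contains("ParquetHiveSerDe") => "parquet"
    case Some(s) if s.contains("OrcSerde")         => "orc"
    case _                                         => "hive"
  }

  def main(args: Array[String]): Unit = {
    // Parquet and ORC serdes map to their data source providers;
    // anything else falls back to the Hive provider.
    println(extractFileFormat(Some("org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"))) // parquet
    println(extractFileFormat(Some("org.apache.hadoop.hive.ql.io.orc.OrcSerde")))                   // orc
    println(extractFileFormat(None))                                                                 // hive
  }
}
```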
 
 Review comment:
   @HyukjinKwon, I could use `USING file_format` explicitly and that would serve the purpose, but I thought it is better to fix this and bring it in line with the `CTAS` behavior, which was fixed in this [PR](https://github.com/apache/spark/pull/22514).
   
   If you agree to go ahead, I can try making the changes in the analysis/optimizer layer instead of the parser, as you suggested.
   Sure, I will follow the template; I have updated the PR description. Thanks :)
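   For reference, the two syntactic forms being discussed could be exercised roughly like this (a sketch, not from the PR: the local SparkSession setup, output paths, and the `src` temp view are illustrative, and the `STORED AS` variant needs Hive support on the classpath):
   
   ```scala
   import org.apache.spark.sql.SparkSession
   
   object InsertOverwriteDirSyntax {
     def main(args: Array[String]): Unit = {
       val spark = SparkSession.builder()
         .appName("insert-overwrite-dir-syntax")
         .master("local[*]")
         .enableHiveSupport() // requires spark-hive on the classpath for the STORED AS variant
         .getOrCreate()
   
       spark.range(10).createOrReplaceTempView("src")
   
       // Data source syntax: USING names the provider explicitly, so the write
       // always goes through the data source path.
       spark.sql(
         """INSERT OVERWRITE DIRECTORY '/tmp/out_using'
           |USING parquet
           |SELECT id FROM src""".stripMargin)
   
       // Hive syntax: STORED AS goes through the Hive serde writer unless the
       // plan is converted to the data source path, which is what this PR is about.
       spark.sql(
         """INSERT OVERWRITE DIRECTORY '/tmp/out_stored_as'
           |STORED AS parquet
           |SELECT id FROM src""".stripMargin)
   
       spark.stop()
     }
   }
   ```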

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 