AngersZhuuuu commented on a change in pull request #29085:
URL: https://github.com/apache/spark/pull/29085#discussion_r456193004



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala
##########
@@ -712,14 +713,10 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder(conf) {
           None
         }
         (Seq.empty, Option(name), props.toSeq, recordHandler)
-
+      // SPARK-32106: When there is no definition about format, we return empty result
+      // then we finally execute with SparkScriptTransformationExec

Review comment:
    Moved it down and kept it like:
    ```
    case null if conf.getConf(CATALOG_IMPLEMENTATION).equals("hive") =>
      // Use default (serde) format.
      val name = conf.getConfString("hive.script.serde",
        "org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe")
      val props = Seq("field.delim" -> "\t")
      val recordHandler = Option(conf.getConfString(configKey, defaultConfigValue))
      (Nil, Option(name), props, recordHandler)

    // SPARK-32106: When there is no definition about format, we return empty result
    // to use a built-in default Serde in SparkScriptTransformationExec.
    case null =>
      (Nil, None, Seq.empty, None)
    ```
   
    The way we decide whether to use SparkScriptTransformationExec or HiveScriptTransformationExec still needs to be refactored once Spark's own serde is added in @alfozan's PR.
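    To illustrate the fallback logic above, here is a minimal, self-contained sketch (not the actual `SparkSqlAstBuilder` code; `resolveFormat`, its parameters, and `ScriptFormatSketch` are hypothetical names) of how the 4-tuple returned by the `ROW FORMAT` handling distinguishes "use Hive's default serde" from "empty result, let SparkScriptTransformationExec use its built-in default":

    ```scala
    // Hypothetical sketch of the ROW FORMAT fallback logic discussed above.
    // The real code lives in SparkSqlAstBuilder and pattern-matches on the
    // parsed rowFormat context; here a Boolean and a String stand in for it.
    object ScriptFormatSketch {
      // Mirrors the shape (delimiters, serdeClass, serdeProps, recordHandler)
      type Format = (Seq[String], Option[String], Seq[(String, String)], Option[String])

      def resolveFormat(rowFormatDefined: Boolean, catalogImpl: String): Format =
        if (rowFormatDefined) {
          // Explicit ROW FORMAT SERDE ... (details elided in this sketch).
          (Nil, Some("user.provided.SerDe"), Nil, None)
        } else if (catalogImpl == "hive") {
          // No ROW FORMAT but a Hive catalog: fall back to Hive's LazySimpleSerDe
          // with tab-delimited fields, as in the quoted case clause above.
          (Nil, Some("org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"),
            Seq("field.delim" -> "\t"), None)
        } else {
          // SPARK-32106: no ROW FORMAT and no Hive catalog -> empty result,
          // so SparkScriptTransformationExec uses its built-in default serde.
          (Nil, None, Seq.empty, None)
        }

      def main(args: Array[String]): Unit = {
        assert(resolveFormat(false, "in-memory") == ((Nil, None, Seq.empty, None)))
        assert(resolveFormat(false, "hive")._2
          .contains("org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe"))
      }
    }
    ```

    The point of the empty tuple in the final branch is that downstream code can treat "no serde name, no props" as the signal to pick the Spark-native path without any Hive dependency.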




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
