juliuszsompolski commented on a change in pull request #28705:
URL: https://github.com/apache/spark/pull/28705#discussion_r434543531



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/HiveResult.scala
##########
@@ -37,30 +37,45 @@ object HiveResult {
    * Returns the result as a hive compatible sequence of strings. This is used in tests and
    * `SparkSQLDriver` for CLI applications.
    */
-  def hiveResultString(executedPlan: SparkPlan): Seq[String] = executedPlan match {
-    case ExecutedCommandExec(_: DescribeCommandBase) =>
-      formatDescribeTableOutput(executedPlan.executeCollectPublic())
-    case _: DescribeTableExec =>
-      formatDescribeTableOutput(executedPlan.executeCollectPublic())
-    // SHOW TABLES in Hive only output table names while our v1 command outputs
-    // database, table name, isTemp.
-    case command @ ExecutedCommandExec(s: ShowTablesCommand) if !s.isExtended =>
-      command.executeCollect().map(_.getString(1))
-    // SHOW TABLES in Hive only output table names while our v2 command outputs
-    // namespace and table name.
-    case command : ShowTablesExec =>
-      command.executeCollect().map(_.getString(1))
-    // SHOW VIEWS in Hive only outputs view names while our v1 command outputs
-    // namespace, viewName, and isTemporary.
-    case command @ ExecutedCommandExec(_: ShowViewsCommand) =>
-      command.executeCollect().map(_.getString(1))
-    case other =>
-      val result: Seq[Seq[Any]] = other.executeCollectPublic().map(_.toSeq).toSeq
-      // We need the types so we can output struct field names
-      val types = executedPlan.output.map(_.dataType)
-      // Reformat to match hive tab delimited output.
-      result.map(_.zip(types).map(e => toHiveString(e)))

Review comment:
       Following that example:
   ```
   $ export TZ="Europe/Moscow"
   $ ./bin/spark-sql -S
   spark-sql> set spark.sql.session.timeZone=America/Los_Angeles;
   spark.sql.session.timeZone   America/Los_Angeles
   spark-sql> select date '2020-06-03';
   2020-06-02
   spark-sql> select make_date(2020, 6, 3);
   2020-06-02
   ```
   Could you explain why `make_date(2020, 6, 3)` -> `2020-06-02` happens?
   Does make_date create a date at midnight 2020-06-03 in the Moscow TZ, which is then returned in America/Los_Angeles, where it is still 2020-06-02?
   Could you explain, step by step with examples, what type and what timezone are used during parsing, during collecting, and for the string display, before and after the changes?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


