wangyum commented on a change in pull request #32563:
URL: https://github.com/apache/spark/pull/32563#discussion_r633104523



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/HiveResult.scala
##########
@@ -65,6 +65,11 @@ object HiveResult {
     // database, table name, isTemp.
     case command @ ExecutedCommandExec(s: ShowTablesCommand) if !s.isExtended =>
       command.executeCollect().map(_.getString(1))
+    // SHOW TABLE EXTENDED in Hive does not have isTemp while our v1 command outputs isTemp.
+    case command @ ExecutedCommandExec(s: ShowTablesCommand) if s.isExtended =>
+      command.executeCollect().map(_.getMap(3))
+        .map(kv => kv.keyArray().array.zip(kv.valueArray().array)
+          .map(kv => s"${kv._1}: ${kv._2}").mkString("\n"))

Review comment:
       Hive output:
   ```
   hive> SHOW TABLE EXTENDED LIKE '*';
   OK
   tableName:spark_32976
   owner:yumwang
   location:file:/tmp/spark/spark_32976
   inputformat:org.apache.hadoop.mapred.TextInputFormat
   outputformat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
   columns:struct columns { i32 id, string name}
   partitioned:true
   partitionColumns:struct partition_columns { string part}
   totalNumberFiles:unknown
   totalFileSize:unknown
   maxFileSize:unknown
   minFileSize:unknown
   lastAccessTime:unknown
   lastUpdateTime:unknown
   
   tableName:t1
   owner:yumwang
   location:file:/tmp/hive/t1
   inputformat:org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
   outputformat:org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
   columns:struct columns { string id}
   partitioned:true
   partitionColumns:struct partition_columns { date part}
   totalNumberFiles:unknown
   totalFileSize:unknown
   maxFileSize:unknown
   minFileSize:unknown
   lastAccessTime:unknown
   lastUpdateTime:unknown
   ```
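    For comparison, a minimal, self-contained sketch (not the PR's exact code path, which reads Catalyst `MapData` via `getMap(3)`) of how the new `isExtended` branch renders the `information` map as Hive-style `key: value` lines:
   ```scala
   // Simplified stand-in for the proposed formatting: zip keys with values and
   // join the "key: value" pairs with newlines, mirroring Hive's layout above.
   object ShowTableExtendedFormatSketch {
     def main(args: Array[String]): Unit = {
       val keys   = Array("tableName", "owner", "location")
       val values = Array("t1", "yumwang", "file:/tmp/hive/t1")

       val rendered = keys.zip(values)
         .map { case (k, v) => s"$k: $v" }
         .mkString("\n")

       println(rendered)
       // tableName: t1
       // owner: yumwang
       // location: file:/tmp/hive/t1
     }
   }
   ```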

##########
File path: docs/sql-migration-guide.md
##########
@@ -49,7 +49,7 @@ license: |
   
  - In Spark 3.2, the output schema of `SHOW TABLES` becomes `namespace: string, tableName: string, isTemporary: boolean`. In Spark 3.1 or earlier, the `namespace` field was named `database` for the builtin catalog, and there is no `isTemporary` field for v2 catalogs. To restore the old schema with the builtin catalog, you can set `spark.sql.legacy.keepCommandOutputSchema` to `true`.
   
-  - In Spark 3.2, the output schema of `SHOW TABLE EXTENDED` becomes `namespace: string, tableName: string, isTemporary: boolean, information: string`. In Spark 3.1 or earlier, the `namespace` field was named `database` for the builtin catalog, and no change for the v2 catalogs. To restore the old schema with the builtin catalog, you can set `spark.sql.legacy.keepCommandOutputSchema` to `true`.
+  - In Spark 3.2, the output schema of `SHOW TABLE EXTENDED` becomes `namespace: string, tableName: string, isTemporary: boolean, information: map[string, string]`. In Spark 3.1 or earlier, the `namespace` field was named `database` for the builtin catalog, and no change for the v2 catalogs. To restore the old schema with the builtin catalog, you can set `spark.sql.legacy.keepCommandOutputSchema` to `true`.

Review comment:
    Do we need to make `spark.sql.legacy.keepCommandOutputSchema` also apply to the `information` column?
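    As a side note, a minimal sketch (assuming local mode and the config name quoted in the guide above) of how a user would opt back into the pre-3.2 schema for the builtin catalog:
   ```scala
   import org.apache.spark.sql.SparkSession

   // Sketch only: enable the legacy output schema and inspect what
   // SHOW TABLE EXTENDED actually returns under it.
   val spark = SparkSession.builder()
     .appName("legacy-schema-check")
     .master("local[*]")
     .getOrCreate()
   spark.conf.set("spark.sql.legacy.keepCommandOutputSchema", "true")
   spark.sql("SHOW TABLE EXTENDED LIKE '*'").printSchema()
   ```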




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


