AngersZhuuuu commented on a change in pull request #32897:
URL: https://github.com/apache/spark/pull/32897#discussion_r650886409
##########
File path: docs/sql-migration-guide.md
##########
@@ -48,8 +48,8 @@ license: |
- In Spark 3.2, the auto-generated `Cast` (such as those added by type
coercion rules) will be stripped when generating column alias names. E.g.,
`sql("SELECT floor(1)").columns` will be `FLOOR(1)` instead of `FLOOR(CAST(1 AS
DOUBLE))`.
- In Spark 3.2, the output schema of `SHOW TABLES` becomes `namespace:
string, tableName: string, isTemporary: boolean`. In Spark 3.1 or earlier, the
`namespace` field was named `database` for the builtin catalog, and there was no
`isTemporary` field for v2 catalogs. To restore the old schema with the builtin
catalog, you can set `spark.sql.legacy.keepCommandOutputSchema` to `true`.
-
- - In Spark 3.2, the output schema of `SHOW TABLE EXTENDED` becomes
`namespace: string, tableName: string, isTemporary: boolean, information:
string`. In Spark 3.1 or earlier, the `namespace` field was named `database`
for the builtin catalog, and no change for the v2 catalogs. To restore the old
schema with the builtin catalog, you can set
`spark.sql.legacy.keepCommandOutputSchema` to `true`.
+
+ - In Spark 3.2, the output schema of `SHOW TABLE EXTENDED` becomes
`namespace: string, tableName: string, isTemporary: boolean,
information:struct<Database:string,Table:string,Owner:string,Created
Time:date,Last Access:date,Created
By:string,Type:string,Provider:string,Bucket:struct<Num Buckets:string,Bucket
Columns:string,Sort Columns:string>,Comment:string,View Information:struct<View
Text:string,View Original Text:string,View Catalog and Namespace:string,View
Query Output Columns:string>,Table
Properties:string,Statistics:string,Storage:struct<Location:string,Serde
Library:string,InputFormat:string,OutputFormat:string,Compressed:string,Storage
Properties:string>,Partition Provider:string,Partition Columns:string,Partition
Values:string,Partition Parameters:string,Partition
Statistics:string,schema:string>`. In Spark 3.1 or earlier, the `namespace`
field was named `database` for the builtin catalog, the `information` field was
of string type, and there is no change for the v2 catalogs. To restore the old
schema, you can set `spark.sql.legacy.keepCommandOutputSchema` to `true`.
Review comment:
> Can we just show the result from df.printSchema in a code block? seems
difficult to read.
How about the current version?
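To illustrate the readability concern, here is a minimal stdlib sketch (not Spark code, and not part of this PR) that renders a Spark-style `struct<...>` type string as a `printSchema`-like tree. The field subset below is taken from the `information` struct in the diff above; the parser itself is a hypothetical illustration.

```python
# Hedged sketch: render a Spark-style `struct<...>` type string as a
# printSchema-like tree, to show how the proposed `information` schema
# reads in a code block. NOT Spark's implementation.

def split_fields(body: str) -> list[str]:
    """Split 'name:type,name:type,...' on commas at angle-bracket depth 0."""
    parts, depth, start = [], 0, 0
    for i, ch in enumerate(body):
        if ch == "<":
            depth += 1
        elif ch == ">":
            depth -= 1
        elif ch == "," and depth == 0:
            parts.append(body[start:i])
            start = i + 1
    parts.append(body[start:])
    return parts

def schema_lines(type_str: str, level: int = 0) -> list[str]:
    """Render a struct<...> string as ' |-- name: type' lines."""
    body = type_str[len("struct<"):-1]  # strip 'struct<' and trailing '>'
    lines = []
    for field in split_fields(body):
        name, _, ftype = field.partition(":")  # first ':' ends the field name
        indent = "    " * level
        if ftype.startswith("struct<"):
            lines.append(f"{indent} |-- {name}: struct")
            lines.extend(schema_lines(ftype, level + 1))
        else:
            lines.append(f"{indent} |-- {name}: {ftype}")
    return lines

# A small subset of the `information` struct from the diff above:
info = ("struct<Database:string,Table:string,"
        "Bucket:struct<Num Buckets:string,Bucket Columns:string>,"
        "Comment:string>")
print("\n".join(schema_lines(info)))
```

This prints an indented tree (one `|--` line per field, nested structs indented), which is arguably easier to scan than the single-line schema string in the migration guide text.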
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]