Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17394#discussion_r109300476
--- Diff: sql/core/src/test/resources/sql-tests/results/describe.sql.out ---
@@ -1,205 +1,259 @@
-- Automatically generated by SQLQueryTestSuite
--- Number of queries: 14
+-- Number of queries: 31
-- !query 0
-CREATE TABLE t (a STRING, b INT, c STRING, d STRING) USING parquet PARTITIONED BY (c, d) COMMENT 'table_comment'
+CREATE TABLE t (a STRING, b INT, c STRING, d STRING) USING parquet
+ PARTITIONED BY (c, d) CLUSTERED BY (a) SORTED BY (b ASC) INTO 2 BUCKETS
+ COMMENT 'table_comment'
-- !query 0 schema
struct<>
-- !query 0 output
-- !query 1
-ALTER TABLE t ADD PARTITION (c='Us', d=1)
+CREATE TEMPORARY VIEW temp_v AS SELECT * FROM t
-- !query 1 schema
struct<>
-- !query 1 output
-- !query 2
-DESCRIBE t
+CREATE TEMPORARY VIEW temp_Data_Source_View
+ USING org.apache.spark.sql.sources.DDLScanSource
+ OPTIONS (
+ From '1',
+ To '10',
+ Table 'test1')
-- !query 2 schema
-struct<col_name:string,data_type:string,comment:string>
+struct<>
-- !query 2 output
-# Partition Information
+
+
+
+-- !query 3
+CREATE VIEW v AS SELECT * FROM t
+-- !query 3 schema
+struct<>
+-- !query 3 output
+
+
+
+-- !query 4
+ALTER TABLE t ADD PARTITION (c='Us', d=1)
+-- !query 4 schema
+struct<>
+-- !query 4 output
+
+
+
+-- !query 5
+DESCRIBE t
+-- !query 5 schema
+struct<col_name:string,data_type:string,comment:string>
+-- !query 5 output
# col_name data_type comment
a string
b int
c string
-c string
d string
+# Partition Information
+# col_name data_type comment
+c string
d string
--- !query 3
-DESC t
--- !query 3 schema
+-- !query 6
+DESC default.t
+-- !query 6 schema
struct<col_name:string,data_type:string,comment:string>
--- !query 3 output
-# Partition Information
+-- !query 6 output
# col_name data_type comment
a string
b int
c string
-c string
d string
+# Partition Information
+# col_name data_type comment
+c string
d string
--- !query 4
+-- !query 7
DESC TABLE t
--- !query 4 schema
+-- !query 7 schema
struct<col_name:string,data_type:string,comment:string>
--- !query 4 output
-# Partition Information
+-- !query 7 output
# col_name data_type comment
a string
b int
c string
-c string
d string
+# Partition Information
+# col_name data_type comment
+c string
d string
--- !query 5
+-- !query 8
DESC FORMATTED t
--- !query 5 schema
+-- !query 8 schema
struct<col_name:string,data_type:string,comment:string>
--- !query 5 output
-# Detailed Table Information
-# Partition Information
-# Storage Information
+-- !query 8 output
# col_name data_type comment
-Comment: table_comment
-Compressed: No
-Created:
-Database: default
-Last Access:
-Location: sql/core/spark-warehouse/t
-Owner:
-Partition Provider: Catalog
-Storage Desc Parameters:
-Table Parameters:
-Table Type: MANAGED
a string
b int
c string
+d string
+# Partition Information
+# col_name data_type comment
c string
d string
-d string
+
+# Detailed Table Information
+Database default
+Table t
+Created [not included in comparison]
+Last Access [not included in comparison]
+Type MANAGED
+Provider parquet
+Num Buckets 2
+Bucket Columns [`a`]
+Sort Columns [`b`]
+Comment table_comment
+Location [not included in comparison]sql/core/spark-warehouse/t
+Partition Provider Catalog
--- !query 6
+-- !query 9
DESC EXTENDED t
--- !query 6 schema
+-- !query 9 schema
struct<col_name:string,data_type:string,comment:string>
--- !query 6 output
-# Detailed Table Information CatalogTable(
- Table: `default`.`t`
- Created:
- Last Access:
- Type: MANAGED
- Schema: [StructField(a,StringType,true), StructField(b,IntegerType,true), StructField(c,StringType,true), StructField(d,StringType,true)]
- Provider: parquet
- Partition Columns: [`c`, `d`]
- Comment: table_comment
- Storage(Location: sql/core/spark-warehouse/t)
- Partition Provider: Catalog)
-# Partition Information
+-- !query 9 output
# col_name data_type comment
a string
b int
c string
+d string
+# Partition Information
+# col_name data_type comment
c string
d string
-d string
+
+# Detailed Table Information
+Database default
+Table t
+Created [not included in comparison]
+Last Access [not included in comparison]
+Type MANAGED
+Provider parquet
+Num Buckets 2
+Bucket Columns [`a`]
+Sort Columns [`b`]
+Comment table_comment
+Location [not included in comparison]sql/core/spark-warehouse/t
+Partition Provider Catalog
--- !query 7
+-- !query 10
DESC t PARTITION (c='Us', d=1)
--- !query 7 schema
+-- !query 10 schema
struct<col_name:string,data_type:string,comment:string>
--- !query 7 output
-# Partition Information
+-- !query 10 output
# col_name data_type comment
a string
b int
c string
-c string
d string
+# Partition Information
+# col_name data_type comment
+c string
d string
--- !query 8
+-- !query 11
DESC EXTENDED t PARTITION (c='Us', d=1)
--- !query 8 schema
+-- !query 11 schema
struct<col_name:string,data_type:string,comment:string>
--- !query 8 output
-# Partition Information
+-- !query 11 output
# col_name data_type comment
-Detailed Partition Information CatalogPartition(
- Partition Values: [c=Us, d=1]
- Storage(Location: sql/core/spark-warehouse/t/c=Us/d=1)
- Partition Parameters:{})
a string
b int
c string
+d string
+# Partition Information
+# col_name data_type comment
c string
d string
-d string
+
+# Detailed Partition Information
+Database default
+Table t
+Partition Values [c=Us, d=1]
+Location [not included in comparison]sql/core/spark-warehouse/t/c=Us/d=1
+
+# Table Storage Information
--- End diff ---
Hive does it. See the following output.
```
hive> DESC FORMATTED page_view PARTITION(dt='part1', country='part2');
OK
# col_name data_type comment
viewtime int
userid bigint
page_url string
referrer_url string
ip string IP Address of the User
# Partition Information
# col_name data_type comment
dt string
country string
# Detailed Partition Information
Partition Value: [part1, part2]
Database: default
Table: page_view
CreateTime: Sat Apr 01 17:05:25 UTC 2017
LastAccessTime: UNKNOWN
Location: file:/user/hive/warehouse/page_view/dt=part1/country=part2
Partition Parameters:
COLUMN_STATS_ACCURATE {\"BASIC_STATS\":\"true\"}
numFiles 0
numRows 0
rawDataSize 0
totalSize 0
transient_lastDdlTime 1491066325
# Storage Information
SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
InputFormat: org.apache.hadoop.mapred.SequenceFileInputFormat
OutputFormat: org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
Compressed: No
Num Buckets: -1
Bucket Columns: []
Sort Columns: []
Storage Desc Params:
serialization.format 1
Time taken: 0.124 seconds, Fetched: 39 row(s)
hive> DESC EXTENDED page_view PARTITION(dt='part1', country='part2');
OK
viewtime int
userid bigint
page_url string
referrer_url string
ip string IP Address of the User
dt string
country string
# Partition Information
# col_name data_type comment
dt string
country string
Detailed Partition Information Partition(values:[part1, part2], dbName:default, tableName:page_view, createTime:1491066325, lastAccessTime:0, sd:StorageDescriptor(cols:[FieldSchema(name:viewtime, type:int, comment:null), FieldSchema(name:userid, type:bigint, comment:null), FieldSchema(name:page_url, type:string, comment:null), FieldSchema(name:referrer_url, type:string, comment:null), FieldSchema(name:ip, type:string, comment:IP Address of the User), FieldSchema(name:dt, type:string, comment:null), FieldSchema(name:country, type:string, comment:null)], location:file:/user/hive/warehouse/page_view/dt=part1/country=part2, inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{}), storedAsSubDirectories:false), parameters:{totalSize=0, numRows=0, rawDataSize=0, COLUMN_STATS_ACCURATE={"BASIC_STATS":"true"}, numFiles=0, transient_lastDdlTime=1491066325})
Time taken: 0.116 seconds, Fetched: 15 row(s)
```
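
For comparison, a minimal sketch of the Spark SQL statements from the test above that exercise the same path (the table name, bucketing spec, and partition spec are taken from the diff; the session itself is hypothetical):

```
-- Sketch only: these statements mirror queries 0, 4, and 11 of the golden file above.
CREATE TABLE t (a STRING, b INT, c STRING, d STRING) USING parquet
  PARTITIONED BY (c, d) CLUSTERED BY (a) SORTED BY (b ASC) INTO 2 BUCKETS
  COMMENT 'table_comment';

ALTER TABLE t ADD PARTITION (c='Us', d=1);

-- With this change, the partition description ends with a
-- "# Table Storage Information" section, analogous to Hive's
-- "# Storage Information" section above.
DESC EXTENDED t PARTITION (c='Us', d=1);
```

The golden output in the diff is what SQLQueryTestSuite records for these statements.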
