Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15983
Yeah, table repair is expensive, but this causes an external behavior
change. I tried it in 2.0: the whole data source table can be read without
repairing the table. In 2.1, the same read returns an empty result unless we
repair the table first.
```Scala
scala> spark.range(5).selectExpr("id as fieldOne", "id as partCol").write.partitionBy("partCol").mode("overwrite").saveAsTable("test")
16/12/17 17:41:20 WARN CreateDataSourceTableUtils: Persisting partitioned data source relation `test` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. Input path(s): file:/Users/xiaoli/sparkBin/spark-2.0.2-bin-hadoop2.7/bin/spark-warehouse/test
scala> spark.sql("select * from test").show()
+--------+-------+
|fieldOne|partCol|
+--------+-------+
|       2|      2|
|       1|      1|
|       3|      3|
|       0|      0|
|       4|      4|
+--------+-------+
scala> spark.sql("desc formatted test").show(50, false)
+----------------------------+------------------------------------------------------------------------------+-------+
|col_name                    |data_type                                                                     |comment|
+----------------------------+------------------------------------------------------------------------------+-------+
...
|  path                      |file:/Users/xiaoli/sparkBin/spark-2.0.2-bin-hadoop2.7/bin/spark-warehouse/test|       |
+----------------------------+------------------------------------------------------------------------------+-------+
scala> spark.sql(s"create table newTab (fieldOne long, partCol int) using
parquet options (path
'/Users/xiaoli/sparkBin/spark-2.0.2-bin-hadoop2.7/bin/spark-warehouse/test')
partitioned by (partCol)")
16/12/17 17:43:24 WARN CreateDataSourceTableUtils: Persisting partitioned data source relation `newTab` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive. Input path(s): file:/Users/xiaoli/sparkBin/spark-2.0.2-bin-hadoop2.7/bin/spark-warehouse/test
res3: org.apache.spark.sql.DataFrame = []
scala> spark.table("newTab").show()
+--------+-------+
|fieldOne|partCol|
+--------+-------+
|       2|      2|
|       1|      1|
|       3|      3|
|       0|      0|
|       4|      4|
+--------+-------+
```
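For completeness, here is a rough sketch of how the same steps go on 2.1 (not captured output; `newTab2` and the path are hypothetical). With `spark.sql.hive.manageFilesourcePartitions=true`, the 2.1 default, partitions of a file source table are tracked in the metastore, so a table created over an existing path reads as empty until its partitions are recovered:
```Scala
// Sketch of a 2.1 session (hypothetical table name and path).
scala> spark.sql("create table newTab2 (fieldOne long, partCol int) using parquet options (path '/path/to/spark-warehouse/test') partitioned by (partCol)")

// In 2.1 the new table has no partitions registered yet, so this shows no rows.
scala> spark.table("newTab2").show()

// Recover partitions from the file system into the metastore;
// ALTER TABLE newTab2 RECOVER PARTITIONS is equivalent.
scala> spark.sql("msck repair table newTab2")

// After the repair, the five rows are visible again.
scala> spark.table("newTab2").show()
```
On 2.0 the repair step was unnecessary because partitions were discovered by listing the path at query time, which is exactly the behavior change described above.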