Jurgis Pods created SPARK-21994:
-----------------------------------

             Summary: Spark 2.2 can not read Parquet table created by itself
                 Key: SPARK-21994
                 URL: https://issues.apache.org/jira/browse/SPARK-21994
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.2.0
         Environment: Spark 2.2 on Cloudera CDH 5.10.1, Hive 1.1
            Reporter: Jurgis Pods


This seems to be a new bug introduced in Spark 2.2, since it did not occur 
under Spark 2.1.

When writing a DataFrame to a table in Parquet format via saveAsTable, Spark SQL 
does not record the table's 'path' in the Hive metastore, unlike previous versions.

As a consequence, Spark 2.2 is not able to read back the table it has just created: 
querying it only returns the table header without any row content.

A parallel installation of Spark 1.6 at least fails with an informative error 
trace:

{code:java}
17/09/13 10:22:12 WARN metastore.ObjectStore: Version information not found in 
metastore. hive.metastore.schema.verification is not enabled so recording the 
schema version 1.1.0
17/09/13 10:22:12 WARN metastore.ObjectStore: Failed to get database default, 
returning NoSuchObjectException
org.spark-project.guava.util.concurrent.UncheckedExecutionException: 
java.util.NoSuchElementException: key not found: path
[...]
{code}


h3. Steps to reproduce:

Run the following in spark2-shell:

{code:java}
scala> val df = spark.sql("show databases")
scala> df.show()
+--------------------+
|        databaseName|
+--------------------+
|               mydb1|
|               mydb2|
|             default|
|                test|
+--------------------+
scala> df.write.format("parquet").saveAsTable("test.spark22_test")
scala> spark.sql("select * from test.spark22_test").show()
+------------+
|databaseName|
+------------+
+------------+
{code}
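
For reference, one way to check whether a 'path' entry was actually recorded for the 
new table is to inspect its formatted description in the metastore (a minimal sketch; 
the exact rows returned depend on the Spark/Hive versions):

{code:java}
scala> // The table's data location / 'path' storage property should show up here if it was written
scala> spark.sql("describe formatted test.spark22_test").show(100, false)
{code}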

When manually setting the path, it works:


{code:java}
scala> df.write.option("path", 
"/hadoop/eco/hive/warehouse/test.db/spark22_parquet_with_path").format("parquet").saveAsTable("test.spark22_parquet_with_path")

scala> spark.sql("select * from test.spark22_parquet_with_path").show()
+--------------------+
|        databaseName|
+--------------------+
|               mydb1|
|               mydb2|
|             default|
|                test|
+--------------------+
{code}
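
Whether the data files of the first table are written at all is not obvious from the 
empty result alone. As a cross-check (a sketch only, assuming the managed table ends 
up under the same warehouse directory as above; adjust the path to the actual 
installation), the Parquet files can be read back directly, bypassing the metastore:

{code:java}
scala> // Assumed default location of the managed table created in the steps above
scala> spark.read.parquet("/hadoop/eco/hive/warehouse/test.db/spark22_test").show()
{code}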

It is kind of a disaster that we are not able to read tables created by the 
very same Spark version and have to manually specify the path as an explicit 
option.


