Github user gatorsmile commented on the issue:

    https://github.com/apache/spark/pull/19218
  
    I see. If you set `spark.sql.hive.convertMetastoreParquet` to `false`, you 
will also hit this issue for non-partitioned tables. 
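    For reference, a minimal sketch of the setting being discussed (assuming `spark` is an active `SparkSession` with Hive support; the table names are hypothetical):

    ```scala
    // Disable Spark's built-in Parquet reader/writer for Hive metastore
    // Parquet tables, falling back to Hive SerDe handling instead.
    spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")

    // With the conversion disabled, reads of a Hive Parquet table go through
    // the Hive SerDe path, so the behavior applies to non-partitioned tables too.
    spark.sql("SELECT * FROM some_parquet_table").show()
    ```

    The same flag can also be passed at launch time via `--conf spark.sql.hive.convertMetastoreParquet=false`.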

