Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/19471
  
    Waiting for more feedback before moving forward :)
    
    Another thing I want to point out: `sql("create table t using parquet 
options(skipHiveMetadata=true) location '/tmp/t'")` works in Spark 2.0, and 
the created table has a schema with the partition column at the beginning. 
In Spark 2.1 it also works, and `DESC TABLE` likewise shows the partition 
column at the beginning of the schema. However, if you query the table, the 
output schema has the partition column at the end.
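    
    For concreteness, the behavior above can be sketched roughly as follows 
(this assumes `/tmp/t` already holds partitioned parquet data, which the 
original example leaves implicit; the exact schema ordering depends on the 
Spark version, as described above):
    
    ```sql
    -- Create an external table over existing partitioned parquet data,
    -- skipping the Hive metastore for its metadata:
    CREATE TABLE t USING parquet
    OPTIONS (skipHiveMetadata 'true')
    LOCATION '/tmp/t';
    
    -- Spark 2.1: DESC TABLE reports the partition column first ...
    DESC TABLE t;
    
    -- ... but querying the table returns the partition column last:
    SELECT * FROM t;
    ```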
    
    It's been a long time since Spark 2.1 was released and no one has 
reported this behavior change. It seems to be a real corner case, which makes 
me feel we should not complicate our code too much to handle it.

