Github user liancheng commented on the pull request:

    https://github.com/apache/spark/pull/5733#issuecomment-128017486
  
    Refactored this PR with https://github.com/chenghao-intel/spark/pull/2. 
Major changes:
    
    - Remove `spark.sql.hive.writeDataSourceSchema`.
- Always persist the data source relation in Hive compatible format when 
possible, and log a warning explaining why whenever we can't.
- Now we only persist non-partitioned `HadoopFsRelation`s with a single 
input path. The original PR only persisted partition column information 
without adding individual partitions, so persisting partitioned tables 
didn't actually work.
    - Refactor test cases.
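
    The compatibility rule described above can be sketched as a simple 
predicate. This is a minimal illustration, not the actual Spark code: the 
function name, parameters, and reason strings are hypothetical, standing in 
for the checks the PR performs before persisting a relation in Hive 
compatible format.

    ```python
    def is_hive_compatible(is_partitioned, input_paths):
        """Hypothetical sketch: a data source relation is persisted in Hive
        compatible format only when it is non-partitioned and has exactly one
        input path. Returns (compatible, reason); the reason mirrors the kind
        of warning the PR logs when falling back to Spark SQL's own format."""
        if is_partitioned:
            return False, "partitioned relations cannot be persisted in Hive compatible format"
        if len(input_paths) != 1:
            return False, "relations with multiple input paths cannot be persisted in Hive compatible format"
        return True, ""
    ```

    In every other case the table metadata would still be saved, just in Spark 
SQL's own schema format, with the reason surfaced in the warning log.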


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

Reply via email to