GitHub user dongjoon-hyun opened a pull request:

    https://github.com/apache/spark/pull/13730

    [SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws non-intuitive exception

    ## What changes were proposed in this pull request?
    
    This PR fixes the error message thrown when writing an `emptyDataFrame` that has no fields.
    
    **Before**
    ```scala
    scala> spark.emptyDataFrame.write.text("p")
    org.apache.spark.sql.AnalysisException: Cannot use all columns for partition columns;
    ```
    
    **After**
    ```scala
    scala> spark.emptyDataFrame.write.text("p")
    org.apache.spark.sql.AnalysisException: Cannot write dataset with no fields;
    ```
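    
    The gist of the change (a minimal sketch, assuming a standalone helper; the actual patch lives inside Spark's SQL write path) is to reject an empty schema up front, before the partition-column validation that produced the old, misleading message:
    
    ```scala
    import org.apache.spark.sql.types.StructType
    
    // Sketch only: inside Spark this check would raise an AnalysisException from
    // within the sql package; a plain require stands in here so the snippet is
    // self-contained.
    def assertNonEmptySchema(schema: StructType): Unit = {
      require(schema.nonEmpty, "Cannot write dataset with no fields;")
    }
    
    // An empty DataFrame has an empty schema, so the check fails fast:
    // assertNonEmptySchema(spark.emptyDataFrame.schema)  // throws IllegalArgumentException
    ```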
    
    ## How was this patch tested?
    
    Pass the Jenkins tests, including the updated test cases.
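    
    A regression test along these lines (the suite name and test title are assumptions for illustration, and it relies on Spark's internal test utilities; it is not necessarily the exact test case added by the patch) exercises the new message:
    
    ```scala
    import org.apache.spark.sql.{AnalysisException, QueryTest}
    import org.apache.spark.sql.test.SharedSQLContext
    
    // Hypothetical suite: verifies that writing a DataFrame with no fields fails
    // with the new, descriptive AnalysisException message.
    class EmptyDataFrameWriteSuite extends QueryTest with SharedSQLContext {
      test("SPARK-16006: writing a DataFrame with no fields gives a clear error") {
        withTempPath { dir =>
          val e = intercept[AnalysisException] {
            spark.emptyDataFrame.write.text(dir.getCanonicalPath)
          }
          assert(e.getMessage.contains("Cannot write dataset with no fields"))
        }
      }
    }
    ```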

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dongjoon-hyun/spark SPARK-16006

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/13730.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #13730
    
----
commit da1017e10869b922f794f7503340f0d97871d5a8
Author: Dongjoon Hyun <[email protected]>
Date:   2016-06-17T05:22:38Z

    [SPARK-16006][SQL] Attempting to write empty DataFrame with no fields throws non-intuitive exception

----

