GitHub user viirya opened a pull request:

    https://github.com/apache/spark/pull/13134

    [SPARK-15342][SQL][PySpark] PySpark test for non ascii column name does not actually test with unicode column name

    ## What changes were proposed in this pull request?
    
    The PySpark SQL test `test_column_name_with_non_ascii` is meant to exercise a non-ASCII column name, but it does not actually do so: under Python 2 a plain string literal is a byte string, so the unicode column name has to be constructed explicitly with `unicode`.
    
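    A minimal sketch of the corrected check (the session name `spark` and the exact assertions are assumptions for illustration, not necessarily the patch itself): on Python 3 a literal is already a unicode `str`, while on Python 2 the unicode object must be built explicitly.
    
    ```python
    # -*- coding: utf-8 -*-
    # Sketch only: assumes a SparkSession bound to `spark`, as in the PySpark
    # SQL test suite; names and assertions are illustrative.
    import sys
    
    from pyspark.sql.types import LongType, StructField, StructType
    
    if sys.version >= '3':
        columnName = "数量"              # already a unicode str on Python 3
        assert isinstance(columnName, str)
    else:
        # On Python 2 a plain literal is a byte string, so decode it explicitly
        # to actually exercise a non-ASCII (unicode) column name.
        columnName = unicode("数量", "utf-8")  # noqa: F821 (Python 2 only)
        assert isinstance(columnName, unicode)
    
    schema = StructType([StructField(columnName, LongType(), True)])
    df = spark.createDataFrame([(1,)], schema)
    assert df.schema == schema
    assert df.select(columnName).first()[0] == 1
    ```
    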
    ## How was this patch tested?
    
    Existing tests.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/viirya/spark-1 correct-non-ascii-colname-pytest

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/13134.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #13134
    
----
commit 5f5df821fa91049b4ae0b16062483c1512f7d3e5
Author: Liang-Chi Hsieh <[email protected]>
Date:   2016-05-16T10:07:15Z

    Make the test really test non-ascii column name.

commit 494aee2857966359d4211c6bc6d403bf4ccff6f3
Author: Liang-Chi Hsieh <[email protected]>
Date:   2016-05-16T10:23:42Z

    Test if column name is unicode.

commit b5abe422e3e9eb40094e0a955ee4aa2d8280a7f5
Author: Liang-Chi Hsieh <[email protected]>
Date:   2016-05-16T10:40:22Z

    Consider both python 2 and 3.

----


