GitHub user MaxGekk opened a pull request:

    https://github.com/apache/spark/pull/22956

    [SPARK-25950][SQL] from_csv should respect spark.sql.columnNameOfCorruptRecord

    ## What changes were proposed in this pull request?
    
    Fix `CsvToStructs` so that it takes the SQL config `spark.sql.columnNameOfCorruptRecord` into account, in the same way `from_json` already does.
    
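    A minimal sketch of the intended behaviour after this change (Scala, `spark-shell`);
    the corrupt record column name `_unparsed` and the sample data are illustrative
    assumptions, not taken from the patch itself:

    ```scala
    import org.apache.spark.sql.functions.from_csv
    import org.apache.spark.sql.types._

    // Non-default name for the corrupt record column (illustrative).
    spark.conf.set("spark.sql.columnNameOfCorruptRecord", "_unparsed")

    // The corrupt record column has to be declared in the target schema.
    val schema = new StructType()
      .add("a", IntegerType)
      .add("_unparsed", StringType)

    import spark.implicits._
    val df = Seq("123", "not a number").toDF("value")
      .select(from_csv($"value", schema, Map.empty[String, String]).as("csv"))

    // With the fix, the malformed line should land in `csv._unparsed`,
    // mirroring the behaviour of `from_json`.
    df.select("csv.*").show()
    ```
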
    ## How was this patch tested?
    
    Added a new test where `spark.sql.columnNameOfCorruptRecord` is set to a corrupt record column name that differs from the default `_corrupt_record`.
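
    A hedged sketch of how such a test could be shaped (assuming it lives in a
    suite like `CsvFunctionsSuite` extending `QueryTest` with `SharedSQLContext`;
    the column name `_unparsed` and the sample rows are illustrative, not copied
    from the patch):

    ```scala
    import java.sql.Date

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.functions.from_csv
    import org.apache.spark.sql.internal.SQLConf
    import org.apache.spark.sql.types._

    test("from_csv respects spark.sql.columnNameOfCorruptRecord") {
      import testImplicits._
      withSQLConf(SQLConf.COLUMN_NAME_OF_CORRUPT_RECORD.key -> "_unparsed") {
        val schema = new StructType()
          .add("i", IntegerType)
          .add("d", DateType)
          .add("_unparsed", StringType)
        val readback = Seq("1,2013-02-27", "x,y").toDS()
          .select(from_csv($"value", schema, Map.empty[String, String]))

        // In PERMISSIVE mode (the default) the malformed line is expected to
        // end up in the non-default corrupt record column `_unparsed`.
        checkAnswer(readback, Seq(
          Row(Row(1, Date.valueOf("2013-02-27"), null)),
          Row(Row(null, null, "x,y"))))
      }
    }
    ```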


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/MaxGekk/spark-1 csv-tests

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/22956.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #22956
    
----
commit 797dfc68da7a1038cd9c2e725d44ca4561a16edd
Author: Maxim Gekk <max.gekk@...>
Date:   2018-11-06T13:15:19Z

    Added a test

commit 0767c50dc9419060ce9ef446fa58db4c2c95a9ab
Author: Maxim Gekk <max.gekk@...>
Date:   2018-11-06T13:15:40Z

    Taking into account SQL config

----

