[ https://issues.apache.org/jira/browse/SPARK-32961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201338#comment-17201338 ]

Hyukjin Kwon commented on SPARK-32961:
--------------------------------------

UTF-16 doesn't work correctly with CSV when {{multiLine}} is disabled. The 
encoding should be specified explicitly as either UTF-16LE or UTF-16BE.

This is because the BOM exists only at the very beginning of the CSV file (as 
written for UTF-16), while CSV parsing happens per file partition, and the 
partitions other than the first do not contain the BOM.

As a workaround, you can enable {{multiLine}}, or use UTF-16LE or UTF-16BE.
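
For example, a minimal sketch of both workarounds (the file path is 
illustrative; adjust it to wherever the attached sendo_sample.csv is stored):

{code:python}
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Workaround 1: enable multiLine so the file is not split into partitions,
# letting the BOM at the start of the file be handled.
df1 = (spark.read
       .option("header", "true")
       .option("encoding", "UTF-16")
       .option("multiLine", "true")
       .csv("sendo_sample.csv"))

# Workaround 2: state the byte order explicitly so no BOM is needed
# (use UTF-16BE instead if the file is big-endian).
df2 = (spark.read
       .option("header", "true")
       .option("encoding", "UTF-16LE")
       .csv("sendo_sample.csv"))
{code}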

> PySpark CSV read with UTF-16 encoding is not working correctly
> --------------------------------------------------------------
>
>                 Key: SPARK-32961
>                 URL: https://issues.apache.org/jira/browse/SPARK-32961
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.4, 3.0.1
>         Environment: both spark local and cluster mode
>            Reporter: Bui Bao Anh
>            Priority: Major
>              Labels: Correctness
>         Attachments: pandas df.png, pyspark df.png, sendo_sample.csv
>
>
> There are weird characters in the output when printing to the console or 
> writing to files.
> See the attached files for how it looks in a Spark DataFrame versus a Pandas 
> DataFrame.
>  


