[
https://issues.apache.org/jira/browse/SPARK-20336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15982702#comment-15982702
]
HanCheol Cho commented on SPARK-20336:
--------------------------------------
Thank you for your additional test, [~original-brownbear].
I ran the same test on our Hadoop cluster and the result was the same, as
follows.
{code}
$ spark-shell --master yarn --deploy-mode client
Setting default log level to "WARN".
...
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.2.0-SNAPSHOT
/_/
...
scala> spark.read.option("wholeFile", true).option("header", true).csv("file:///tmp/test.encoding.csv").show()
+----+----+--------------------+
|col1|col2| col3|
+----+----+--------------------+
| 1| a| text|
| 2| b| ������������|
| 3| c| ���������|
| 4| d|text
������������...|
| 5| e| last|
+----+----+--------------------+
{code}
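To separate file corruption from reader behavior, one quick cross-check is to round-trip the raw bytes through UTF-8 outside Spark. A minimal sketch in plain Python, with the sample data from the issue inlined (the temporary file path is created by the script, not the cluster path used above):

```python
import tempfile, os

# Sample data from the issue description, written out as UTF-8.
sample = 'col1,col2,col3\n1,a,text\n2,b,テキスト\n3,c,텍스트\n4,d,"text\nテキスト\n텍스트"\n5,e,last\n'

with tempfile.NamedTemporaryFile("w", encoding="utf-8", suffix=".csv", delete=False) as f:
    f.write(sample)
    path = f.name

with open(path, "rb") as f:
    raw = f.read()

# Raises UnicodeDecodeError if the bytes on disk are not valid UTF-8.
decoded = raw.decode("utf-8")
assert decoded == sample
os.remove(path)
```

If this passes, the file itself is intact and the replacement characters above are introduced by whatever decodes the bytes during reading.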
So it seems to be a problem in our cluster.
[~hyukjin.kwon] I will close this issue since it is not a problem in the
Spark module.
Thank you for your help too.
Best wishes,
Han-Cheol
> spark.read.csv() with wholeFile=True option fails to read non-ASCII unicode characters
> ---------------------------------------------------------------------------------------
>
> Key: SPARK-20336
> URL: https://issues.apache.org/jira/browse/SPARK-20336
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.2.0
> Environment: Spark 2.2.0 (master branch is downloaded from Github)
> PySpark
> Reporter: HanCheol Cho
>
> I used the spark.read.csv() method with the wholeFile=True option to load data
> that has multi-line records.
> However, non-ASCII characters are not loaded properly.
> The following is a sample data for test:
> {code:none}
> col1,col2,col3
> 1,a,text
> 2,b,テキスト
> 3,c,텍스트
> 4,d,"text
> テキスト
> 텍스트"
> 5,e,last
> {code}
> When it is loaded without the wholeFile=True option, non-ASCII characters are
> shown correctly, although multi-line records are parsed incorrectly:
> {code:none}
> testdf_default = spark.read.csv("test.encoding.csv", header=True)
> testdf_default.show()
> +----+----+----+
> |col1|col2|col3|
> +----+----+----+
> | 1| a|text|
> | 2| b|テキスト|
> | 3| c| 텍스트|
> | 4| d|text|
> |テキスト|null|null|
> | 텍스트"|null|null|
> | 5| e|last|
> +----+----+----+
> {code}
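The spurious rows above come from line-based splitting: without wholeFile, each physical line becomes a record, so the newlines embedded in row 4's quoted field produce extra rows. A minimal sketch in plain Python, using only that row from the sample:

```python
# Row 4 from the sample data, containing newlines inside a quoted field.
record = '4,d,"text\nテキスト\n텍스트"'

# A line-based reader splits on every newline, ignoring the quotes.
fragments = record.splitlines()

# Three fragments, matching the three rows shown in the output above.
assert fragments == ['4,d,"text', 'テキスト', '텍스트"']
```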
> When wholeFile=True option is used, non-ASCII characters are broken as
> follows:
> {code:none}
> testdf_wholefile = spark.read.csv("test.encoding.csv", header=True, wholeFile=True)
> testdf_wholefile.show()
> +----+----+--------------------+
> |col1|col2| col3|
> +----+----+--------------------+
> | 1| a| text|
> | 2| b| ������������|
> | 3| c| ���������|
> | 4| d|text
> ������������...|
> | 5| e| last|
> +----+----+--------------------+
> {code}
> The result is the same even if I use the encoding="UTF-8" option together
> with wholeFile=True.
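For reference, a quote-aware parser keeps row 4's embedded newlines inside a single record, which is the behavior wholeFile=True is meant to provide, minus the mojibake. A minimal sketch using Python's standard csv module on the sample data from the issue:

```python
import csv, io

# Sample data from the issue description.
sample = 'col1,col2,col3\n1,a,text\n2,b,テキスト\n3,c,텍스트\n4,d,"text\nテキスト\n텍스트"\n5,e,last\n'

# csv.reader honors quoting, so newlines inside quoted fields do not split records.
rows = list(csv.reader(io.StringIO(sample)))
header, records = rows[0], rows[1:]

assert header == ["col1", "col2", "col3"]
assert len(records) == 5                              # five data rows, not seven
assert records[3] == ["4", "d", "text\nテキスト\n텍스트"]  # row 4 stays one record
```

This confirms the data parses cleanly with correct encoding and quote handling outside Spark, consistent with the conclusion that the problem lies in the cluster environment rather than the file or the Spark module.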
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)