[
https://issues.apache.org/jira/browse/SPARK-23410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364788#comment-16364788
]
Bruce Robbins commented on SPARK-23410:
---------------------------------------
I am probably misunderstanding the issue, but I couldn't load UTF-16 (big-endian
or little-endian) encoded JSON files using DataFrameReader.json() (e.g.,
spark.read.json) in Spark 2.2.1, or even in Spark 2.1.2 for that matter. It
always resulted in a Dataset with a "_corrupt_record" column.
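For reference, here is a minimal sketch of the kind of repro I ran in
spark-shell (the file path and sample record are mine; `spark` is the shell's
predefined SparkSession):

{code:scala}
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

// Write a one-record JSON file encoded as UTF-16LE.
val path = "/tmp/utf16le.json"
Files.write(Paths.get(path),
  """{"name": "Maxim"}""".getBytes(StandardCharsets.UTF_16LE))

// Instead of a "name" column, every row lands in "_corrupt_record".
spark.read.json(path).show(truncate = false)
{code}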
> Unable to read jsons in charset different from UTF-8
> ----------------------------------------------------
>
> Key: SPARK-23410
> URL: https://issues.apache.org/jira/browse/SPARK-23410
> Project: Spark
> Issue Type: Bug
> Components: Input/Output
> Affects Versions: 2.3.0
> Reporter: Maxim Gekk
> Priority: Major
>
> Currently the JSON parser is forced to read JSON files in UTF-8. This
> behavior breaks backward compatibility with Spark 2.2.1 and earlier versions,
> which could read JSON files in UTF-16, UTF-32 and other encodings thanks to
> the auto-detection mechanism of the Jackson library. We need to give users
> back the ability to read JSON files in a specified charset and/or to detect
> the charset automatically, as before.
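To illustrate the auto-detection the description refers to: when handed raw
bytes, Jackson sniffs UTF-8/UTF-16/UTF-32 from the leading octets, so a
standalone parse of UTF-16 input works without any charset hint. A minimal,
self-contained sketch (the sample record is illustrative):

{code:scala}
import java.nio.charset.StandardCharsets
import com.fasterxml.jackson.core.JsonFactory

object JacksonAutoDetect {
  def main(args: Array[String]): Unit = {
    // Encode the document as UTF-16 (this JVM charset prepends a BOM).
    val bytes = """{"name": "Maxim"}""".getBytes(StandardCharsets.UTF_16)

    // createParser(Array[Byte]) lets Jackson detect the encoding itself.
    val parser = new JsonFactory().createParser(bytes)
    var token = parser.nextToken()
    while (token != null) {
      println(s"$token ${parser.getText}")
      token = parser.nextToken()
    }
    parser.close()
  }
}
{code}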