Github user MaxGekk commented on a diff in the pull request:
https://github.com/apache/spark/pull/20849#discussion_r175282099
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala
---
@@ -85,6 +85,12 @@ private[sql] class JSONOptions(
val multiLine =
parameters.get("multiLine").map(_.toBoolean).getOrElse(false)
+ /**
+ * Standard charset name. For example, UTF-8, UTF-16 and UTF-32.
+ * If charset is not specified (None), it will be detected automatically.
--- End diff --
ok. How does this one help solve the problem that I am trying to solve with
this PR? Jackson's charset auto-detection mechanism can fail even on UTF-8
input and can infer the wrong charset for many reasons (see
https://github.com/apache/spark/pull/20302), and the user currently has no
way to fix the issue or bypass the auto-detection.
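
For context, here is a minimal sketch (spark-shell style) of how an option
like the one added in this diff could let a user pin the encoding and skip
auto-detection. The option name `charset`, the path, and the encoding value
are illustrative assumptions based on the diff above, not the final API:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical local session and path, for illustration only.
val spark = SparkSession.builder().master("local[*]").getOrCreate()

// Today: Jackson auto-detects the encoding from the byte stream and can
// mis-detect it, even for valid UTF-8 input.
val autoDetected = spark.read.json("/tmp/data.json")

// With the option sketched in this diff: the user pins the charset
// explicitly, so the auto-detection is bypassed entirely.
val pinned = spark.read
  .option("charset", "UTF-16LE")
  .json("/tmp/data.json")
```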
---