Github user MaxGekk commented on a diff in the pull request:
https://github.com/apache/spark/pull/20849#discussion_r175283468
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JSONOptions.scala
---
@@ -85,6 +85,12 @@ private[sql] class JSONOptions(
val multiLine =
parameters.get("multiLine").map(_.toBoolean).getOrElse(false)
+ /**
+ * Standard charset name. For example UTF-8, UTF-16 and UTF-32.
+ * If charset is not specified (None), it will be detected automatically.
--- End diff ---
A fix in Hadoop's line reader and this PR solve two different problems. Any fix
in Hadoop's line reader will not fix the problem of wrong encoding detection. I
don't understand why this PR must depend on a fix in the line reader. I would say a
custom record separator would solve the newline problem too
(https://issues.apache.org/jira/browse/SPARK-23724); see the sketch below.
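To make that concrete, here is a minimal sketch of how the two options could be
combined; it assumes the `charset` option name from this PR and the `lineSep`
option proposed in SPARK-23724, and the file path is hypothetical:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("json-charset-sketch")
  .getOrCreate()

// Setting the charset explicitly bypasses auto-detection entirely, while a
// custom record separator sidesteps the line reader's hard-coded delimiter
// handling. The two concerns are independent of each other.
val df = spark.read
  .option("charset", "UTF-16LE")  // option added by this PR
  .option("lineSep", "\n")        // option proposed in SPARK-23724
  .json("/path/to/utf16.json")    // hypothetical path
```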
> Shouldn't we better fix text datasource with the hadoop's line reader
first?
Could you tell me how this PR blocks solving the problem in Hadoop's
LineReader?
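For reference, a small sketch of why delimiter handling and encoding detection
are separate problems: the byte pattern of a newline depends on the charset, so
a reader that scans for the single byte 0x0A mis-splits UTF-16 records no
matter how the charset was chosen.

```scala
import java.nio.charset.StandardCharsets

// '\n' is one byte in UTF-8 but two bytes in UTF-16LE.
val utf8  = "\n".getBytes(StandardCharsets.UTF_8)     // 0x0A
val utf16 = "\n".getBytes(StandardCharsets.UTF_16LE)  // 0x0A 0x00
println(utf8.map(b => f"0x$b%02X").mkString(" "))     // prints: 0x0A
println(utf16.map(b => f"0x$b%02X").mkString(" "))    // prints: 0x0A 0x00
```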
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]