Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/20068#discussion_r159118238
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
---
@@ -152,7 +152,11 @@ class CSVOptions(
writerSettings.setIgnoreLeadingWhitespaces(ignoreLeadingWhiteSpaceFlagInWrite)
writerSettings.setIgnoreTrailingWhitespaces(ignoreTrailingWhiteSpaceFlagInWrite)
writerSettings.setNullValue(nullValue)
- writerSettings.setEmptyValue(nullValue)
+ // The Univocity parser parses empty strings as `null` by default. This is the
+ // default behavior for Spark too, since `nullValue` defaults to an empty string
+ // and takes precedence over setEmptyValue(). But when `nullValue` is set to a
+ // different value, the empty string should be parsed not as `null` but as an
+ // empty string.
+ writerSettings.setEmptyValue("")
--- End diff --
Could you make this a conf, like what we did for `nullValue`?
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala#L613
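As a rough illustration of the suggestion, making `emptyValue` a user-facing option would mean reading it from the options map the same way `nullValue` is read. The sketch below is a minimal, self-contained approximation; the class and key names are illustrative and Spark's actual `CSVOptions` helpers differ.

```scala
// Hedged sketch: read `emptyValue` from the options map like `nullValue`,
// defaulting to the empty string so existing behavior is preserved.
// Names here are illustrative, not Spark's real CSVOptions API.
class CsvWriteOptions(parameters: Map[String, String]) {
  // `nullValue` defaults to "" in Spark's CSV data source.
  val nullValue: String = parameters.getOrElse("nullValue", "")
  // Proposed: a separate, configurable `emptyValue` with its own default.
  val emptyValue: String = parameters.getOrElse("emptyValue", "")
}

val opts = new CsvWriteOptions(Map("nullValue" -> "NULL"))
// These values would then be passed to the Univocity writer settings:
// writerSettings.setNullValue(opts.nullValue)
// writerSettings.setEmptyValue(opts.emptyValue)
println(opts.nullValue)  // NULL
println(opts.emptyValue) // (empty string)
```

With a separate conf, a user who sets `nullValue` to e.g. `"NULL"` can still control how genuinely empty strings are written, instead of having the two tied together.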
---