Github user aa8y commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20068#discussion_r158606107
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala ---
    @@ -152,7 +152,7 @@ class CSVOptions(
         writerSettings.setIgnoreLeadingWhitespaces(ignoreLeadingWhiteSpaceFlagInWrite)
         writerSettings.setIgnoreTrailingWhitespaces(ignoreTrailingWhiteSpaceFlagInWrite)
         writerSettings.setNullValue(nullValue)
    -    writerSettings.setEmptyValue(nullValue)
    +    writerSettings.setEmptyValue("")
    --- End diff --
    
    I disagree. I don't think the previous behavior should be exposed as an 
option, because the previous behavior was a bug. All it did was _always_ 
coerce empty values to `null`s. If `nullValue` was not set, it defaulted to 
`""`, which coerced `""` to `null`; setting the empty value to `""` had no 
effect in that case. If `nullValue` was set to something else, say `\N`, then 
the empty value was also set to `\N`, which resulted in parsing both `\N` and 
`""` as `null`, since `""` was no longer considered an empty value and 
coercing `""` to `null` is the Univocity parser's default.
    
    Setting the empty value explicitly to the `""` literal ensures that an 
empty string is always parsed as an empty string, unless `nullValue` is left 
unset (it defaults to `""`) or is explicitly set to `""`, which is what people 
would do if they want `""` to be parsed as `null`, i.e. the old behavior.
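
    On the writer side, which is what this diff actually configures, a 
standalone sketch of the intended distinction (the `\N` value and the sample 
rows are illustrative assumptions):

```scala
import com.univocity.parsers.csv.{CsvWriter, CsvWriterSettings}

// Minimal sketch of the Univocity writer settings this diff changes.
val settings = new CsvWriterSettings()
settings.setNullValue("\\N") // nulls are written as \N
settings.setEmptyValue("")   // empty strings are written as-is (this change)

val writer = new CsvWriter(settings)
// Expected with this change: the null is written as \N while the empty string
// stays empty, instead of both being written as \N as before.
println(writer.writeRowToString("Alice", null))
println(writer.writeRowToString("Bob", ""))
```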

