[
https://issues.apache.org/jira/browse/SPARK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15704736#comment-15704736
]
Jork Zijlstra commented on SPARK-17916:
---------------------------------------
I also have the same issue in 2.0.1. This code seems to be the problem:
private def rowToString(row: InternalRow): Seq[String] = {
  var i = 0
  val values = new Array[String](row.numFields)
  while (i < row.numFields) {
    if (!row.isNullAt(i)) {
      values(i) = valueConverters(i).apply(row, i)
    } else {
      values(i) = params.nullValue
    }
    i += 1
  }
  values
}
def castTo(
    datum: String,
    castType: DataType,
    nullable: Boolean = true,
    options: CSVOptions = CSVOptions()): Any = {
  if (nullable && datum == options.nullValue) {
    null
  } else {
    // ... (type-specific casting elided)
  }
}
So first the missing value in the data is transformed into the nullValue. Then,
in castTo, the value is compared against the nullValue, which is always true
for a missing value.
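To illustrate the two-step collapse described above, here is a minimal
self-contained sketch. `normalize` and `castTo` below are hypothetical
stand-ins for the quoted Spark internals (they are not the actual
`rowToString`/`CSVTypeCast.castTo`), simplified to show why an empty field
and the configured nullValue become indistinguishable:

```scala
object NullValueCollapse {
  // Stand-in for the user-configured option("nullValue", "-").
  val nullValue = "-"

  // Step 1 (rowToString analogue): a missing/empty field is
  // replaced by the configured nullValue string.
  def normalize(raw: String): String =
    if (raw == null || raw.isEmpty) nullValue else raw

  // Step 2 (castTo analogue): any datum equal to nullValue
  // is turned into null before casting.
  def castTo(datum: String): Any =
    if (datum == nullValue) null else datum

  def main(args: Array[String]): Unit = {
    println(castTo(normalize("-")))  // null, as the user configured
    println(castTo(normalize("")))   // also null: the reported bug
    println(castTo(normalize("x")))  // x, a normal value
  }
}
```

After step 1 the empty string has already been rewritten to "-", so step 2
cannot tell the two cases apart, matching the behavior reported in the issue.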
> CSV data source treats empty string as null no matter what nullValue option is
> ------------------------------------------------------------------------------
>
> Key: SPARK-17916
> URL: https://issues.apache.org/jira/browse/SPARK-17916
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.0.1
> Reporter: Hossein Falaki
>
> When user configures {{nullValue}} in CSV data source, in addition to those
> values, all empty string values are also converted to null.
> {code}
> data:
> col1,col2
> 1,"-"
> 2,""
> {code}
> {code}
> spark.read.format("csv").option("nullValue", "-")
> {code}
> We will find a null in both rows.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)