[ https://issues.apache.org/jira/browse/SPARK-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15419040#comment-15419040 ]
Barry Becker commented on SPARK-17039:
--------------------------------------
I do specify a schema (.schema(dfSchema)), and it says that the column is a
date column. I left the schema out of the report because there were lots of
other columns, and I need some time to simplify the example. This is from a
unit test that worked fine using Spark 1.6.2 but fails using Spark 2.0.0. I'm
pretty sure it's a real bug. The example in the Stack Overflow post may provide
a better reproducible case.
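The ParseException below comes from java.text date parsing being handed the "?"
sentinel literally instead of treating it as null. A minimal sketch of the
expected null-aware behavior, in plain Scala (parseDateOrNull is a hypothetical
helper for illustration, not Spark's internal parser):

{code}
import java.text.SimpleDateFormat
import java.sql.Date

// Hypothetical helper: check the configured nullValue token ("?") before
// parsing, so the sentinel maps to None instead of throwing
// java.text.ParseException: Unparseable date: "?"
def parseDateOrNull(raw: String, nullValue: String = "?"): Option[Date] = {
  if (raw == null || raw.trim == nullValue) None
  else {
    val fmt = new SimpleDateFormat("yyyy-MM-dd") // assumed date format
    fmt.setLenient(false)
    Some(new Date(fmt.parse(raw.trim).getTime))
  }
}
{code}

With a guard like this, "?" yields None rather than aborting the job, which is
the behavior the nullValue option suggests the CSV reader should provide.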
> cannot read null dates from csv file
> ------------------------------------
>
> Key: SPARK-17039
> URL: https://issues.apache.org/jira/browse/SPARK-17039
> Project: Spark
> Issue Type: Bug
> Components: Input/Output
> Affects Versions: 2.0.0
> Reporter: Barry Becker
>
> I see this exact same bug as reported in this [stack overflow
> post|http://stackoverflow.com/questions/38265640/spark-2-0-pre-csv-parsing-error-if-missing-values-in-date-column]
> using Spark 2.0.0 (released version).
> In Scala, I read a CSV using
> {code}
> sqlContext.read
>   .format("csv")
>   .option("header", "false")
>   .option("inferSchema", "false")
>   .option("nullValue", "?")
>   .schema(dfSchema)
>   .csv(dataFile)
> {code}
> The data contains some null dates (represented with ?).
> The error I get is:
> {code}
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in
> stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0
> (TID 10, localhost): java.text.ParseException: Unparseable date: "?"
> {code}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]