Hyukjin Kwon created SPARK-20978:
------------------------------------

             Summary: CSV throws an NPE when the number of tokens is less than the given schema and a corrupt record column is specified
                 Key: SPARK-20978
                 URL: https://issues.apache.org/jira/browse/SPARK-20978
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.2.0, 2.3.0
            Reporter: Hyukjin Kwon


Currently, if the number of tokens in a record is less than the number of columns in the given schema and {{columnNameOfCorruptRecord}} is set, the CSV datasource throws an NPE as below:

{code}
scala> spark.read.schema("a string, b string, unparsed string").option("columnNameOfCorruptRecord", "unparsed").csv(Seq("a").toDS).show()
17/06/05 13:59:26 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3)
java.lang.NullPointerException
        at scala.collection.immutable.StringLike$class.stripLineEnd(StringLike.scala:89)
        at scala.collection.immutable.StringOps.stripLineEnd(StringOps.scala:29)
        at org.apache.spark.sql.execution.datasources.csv.UnivocityParser.org$apache$spark$sql$execution$datasources$csv$UnivocityParser$$getCurrentInput(UnivocityParser.scala:56)
        at org.apache.spark.sql.execution.datasources.csv.UnivocityParser$$anonfun$org$apache$spark$sql$execution$datasources$csv$UnivocityParser$$convert$1.apply(UnivocityParser.scala:211)
        at org.apache.spark.sql.execution.datasources.csv.UnivocityParser$$anonfun$org$apache$spark$sql$execution$datasources$csv$UnivocityParser$$convert$1.apply(UnivocityParser.scala:211)
        at org.apache.spark.sql.execution.datasources.FailureSafeParser$$anonfun$2.apply(FailureSafeParser.scala:50)
        at org.apache.spark.sql.execution.datasources.FailureSafeParser$$anonfun$2.apply(FailureSafeParser.scala:43)
        at org.apache.spark.sql.execution.datasources.FailureSafeParser.parse(FailureSafeParser.scala:64)
        at org.apache.spark.sql.DataFrameReader$$anonfun$11$$anonfun$apply$4.apply(DataFrameReader.scala:471)
        at org.apache.spark.sql.DataFrameReader$$anonfun$11$$anonfun$apply$4.apply(DataFrameReader.scala:471)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
{code}
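
The trace suggests the tokenizer's current input is {{null}} at the point {{getCurrentInput}} calls {{stripLineEnd}}. Calling {{stripLineEnd}} on a {{null}} {{String}} reproduces the same NPE in isolation (a minimal sketch, not Spark code):

{code}
// A null String is implicitly wrapped in StringOps, and stripLineEnd then
// dereferences it, raising the same NullPointerException as in the trace.
scala> val s: String = null
s: String = null

scala> s.stripLineEnd
java.lang.NullPointerException
  at scala.collection.immutable.StringLike$class.stripLineEnd(StringLike.scala:89)
  at scala.collection.immutable.StringOps.stripLineEnd(StringOps.scala:29)
  ...
{code}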

If {{columnNameOfCorruptRecord}} is not set, it works as below:

{code}
scala> spark.read.schema("a string, b string, unparsed string").csv(Seq("a").toDS).show()
+---+----+--------+
|  a|   b|unparsed|
+---+----+--------+
|  a|null|    null|
+---+----+--------+
{code}
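
The difference suggests the {{null}} only surfaces on the corrupt-record path, where the parser fetches the raw input (via {{getCurrentInput}}, per the trace above) to store in the corrupt column. One way to avoid the crash would be a null guard around that lookup; a minimal sketch under that assumption ({{safeCurrentInput}} is a hypothetical helper for illustration, not the actual Spark code):

{code}
// Hypothetical helper (illustration only): return null instead of
// throwing when the current parsed content is absent.
def safeCurrentInput(currentParsedContent: String): String =
  Option(currentParsedContent).map(_.stripLineEnd).orNull

assert(safeCurrentInput(null) == null)   // no NPE, just null
assert(safeCurrentInput("a\n") == "a")   // line ending stripped as before
{code}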


