GitHub user HyukjinKwon opened a pull request:

    https://github.com/apache/spark/pull/18200

    [SPARK-20978][SQL] Set null for malformed column when the number of tokens is less than schema

    ## What changes were proposed in this pull request?
    
    This PR proposes to fix an NPE thrown when the number of tokens is less than the schema size (namely, the input is parsed successfully but the number of tokens does not match the number of columns in the schema).
    
    **Before**
    
    ```scala
    scala> spark.read.schema("a string, b string, unparsed string").option("columnNameOfCorruptRecord", "unparsed").csv(Seq("a").toDS).show()
    17/06/05 13:59:26 ERROR Executor: Exception in task 0.0 in stage 3.0 (TID 3)
    java.lang.NullPointerException
        at scala.collection.immutable.StringLike$class.stripLineEnd(StringLike.scala:89)
        at scala.collection.immutable.StringOps.stripLineEnd(StringOps.scala:29)
        at org.apache.spark.sql.execution.datasources.csv.UnivocityParser.org$apache$spark$sql$execution$datasources$csv$UnivocityParser$$getCurrentInput(UnivocityParser.scala:56)
        at org.apache.spark.sql.execution.datasources.csv.UnivocityParser$$anonfun$org$apache$spark$sql$execution$datasources$csv$UnivocityParser$$convert$1.apply(UnivocityParser.scala:211)
        at org.apache.spark.sql.execution.datasources.csv.UnivocityParser$$anonfun$org$apache$spark$sql$execution$datasources$csv$UnivocityParser$$convert$1.apply(UnivocityParser.scala:211)
        at org.apache.spark.sql.execution.datasources.FailureSafeParser$$anonfun$2.apply(FailureSafeParser.scala:50)
        at org.apache.spark.sql.execution.datasources.FailureSafeParser$$anonfun$2.apply(FailureSafeParser.scala:43)
        at org.apache.spark.sql.execution.datasources.FailureSafeParser.parse(FailureSafeParser.scala:64)
        at org.apache.spark.sql.DataFrameReader$$anonfun$11$$anonfun$apply$4.apply(DataFrameReader.scala:471)
        at org.apache.spark.sql.DataFrameReader$$anonfun$11$$anonfun$apply$4.apply(DataFrameReader.scala:471)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    ```
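    
    The frames above suggest the failure comes from `getCurrentInput` applying `stripLineEnd` to a raw-record string that is null in this case. The snippet below is only an illustration under that assumption (it is not Spark code), reproducing the same NPE in a plain Scala 2.11 REPL, the Scala version implied by the trace:
    
    ```scala
    // Illustration only: calling stripLineEnd on a null String hits the same
    // StringLike.stripLineEnd / StringOps.stripLineEnd frames seen in the trace,
    // which is consistent with getCurrentInput receiving a null raw record.
    val raw: String = null
    raw.stripLineEnd   // throws java.lang.NullPointerException
    ```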
    
    **After**
    
    ```scala
    scala> spark.read.schema("a string, b string, unparsed string").option("columnNameOfCorruptRecord", "unparsed").csv(Seq("a").toDS).show()
    +---+----+--------+
    |  a|   b|unparsed|
    +---+----+--------+
    |  a|null|    null|
    +---+----+--------+
    ```
    
    It looks like we do not need to put this record into the malformed column: in this case the input is parsed correctly, so recording it there would not be very useful (whereas, when the number of tokens exceeds the schema, we do need to keep the truncated contents). A sketch of the intended behavior follows.
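    
    To make that behavior concrete, here is a minimal sketch of the idea (my own illustration, not the actual patch; `padTokens` and `schemaSize` are hypothetical names): when the parser returns fewer tokens than the schema expects, the missing positions are filled with null so the row converts normally instead of being treated as malformed.
    
    ```scala
    // Sketch only: pad missing trailing tokens with null so conversion can
    // proceed and the corresponding columns come back as null (as in the
    // "After" output above), rather than raising an error for the row.
    def padTokens(tokens: Array[String], schemaSize: Int): Array[String] = {
      if (tokens.length >= schemaSize) {
        tokens
      } else {
        tokens ++ Array.fill[String](schemaSize - tokens.length)(null)
      }
    }
    ```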
    
    ## How was this patch tested?
    
    Unit tests added in `CSVSuite`.
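    
    For reference, a sketch of the kind of regression test this could add to `CSVSuite`, assuming the usual suite helpers (`testImplicits`, `checkAnswer`, `Row`) are in scope; the test name and body are illustrative, not the actual test:
    
    ```scala
    test("SPARK-20978: fewer tokens than schema columns yields nulls, not an NPE") {
      import testImplicits._
      val df = spark.read
        .schema("a string, b string, unparsed string")
        .option("columnNameOfCorruptRecord", "unparsed")
        .csv(Seq("a").toDS)
      // Missing columns and the corrupt-record column both come back as null.
      checkAnswer(df, Row("a", null, null))
    }
    ```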

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark less-token-npe

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/18200.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #18200
    
----
commit 046566cff5555ed67ae6497a33a1965457b29666
Author: hyukjinkwon <[email protected]>
Date:   2017-06-05T05:17:19Z

    Set null for malformed column when the number of tokens is less than schema

----


