GitHub user dongjoon-hyun opened a pull request:

    https://github.com/apache/spark/pull/16472

    [SPARK-18877][SQL][BACKPORT-2.0] `CSVInferSchema.inferField` on DecimalType should find a common type with `typeSoFar`

    ## What changes were proposed in this pull request?
    
    CSV schema inference throws `IllegalArgumentException` on decimal numbers 
with heterogeneous precisions and scales because the current logic keeps only 
the last decimal type seen in a **partition**. Specifically, `inferRowType`, 
the **seqOp** of **aggregate**, returns the newly inferred decimal type 
instead of widening it against `typeSoFar`. This PR fixes it by using 
`findTightestCommonType`, as sketched below.
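    
    As an illustration of the widening rule, here is a minimal, self-contained 
sketch; `inferDecimal` and the decimal-only `findTightestCommonType` below are 
simplified stand-ins for the private helpers in `CSVInferSchema`, not the 
actual patch:
    
    ```scala
    import org.apache.spark.sql.types._

    // Simplified stand-in for Spark's findTightestCommonType, covering only the
    // DecimalType/DecimalType pair; the real helper handles many more types.
    def findTightestCommonType(a: DecimalType, b: DecimalType): Option[DecimalType] = {
      // decimal(p, s) keeps p - s digits before the decimal point, so the common
      // type must cover the wider integral range and the larger scale of the two.
      // (Negative scales, as in this report, are allowed in Spark 2.x.)
      val scale = math.max(a.scale, b.scale)
      val range = math.max(a.precision - a.scale, b.precision - b.scale)
      if (range + scale <= DecimalType.MAX_PRECISION) {
        Some(DecimalType(range + scale, scale))
      } else {
        None
      }
    }

    // Widen the running type against the type inferred from the current field
    // instead of overwriting it, falling back to StringType when they don't merge.
    def inferDecimal(typeSoFar: DataType, field: String): DataType = {
      val bigDecimal = new java.math.BigDecimal(field)
      val newType = DecimalType(bigDecimal.precision, bigDecimal.scale)
      typeSoFar match {
        case d: DecimalType => findTightestCommonType(d, newType).getOrElse(StringType)
        case _              => newType
      }
    }

    // decimal(3,-10) widened with decimal(3,-9) gives decimal(4,-9), rather than
    // the last-seen decimal(3,-9) that triggers the precision error below.
    inferDecimal(inferDecimal(NullType, "9.03E+12"), "1.19E+11")  // decimal(4,-9)
    ```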
    
    **decimal.csv**
    ```
    9.03E+12
    1.19E+11
    ```
    
    **BEFORE**
    ```scala
    scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").printSchema
    root
     |-- _c0: decimal(3,-9) (nullable = true)
    
    scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").show
    16/12/16 14:32:49 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 4)
    java.lang.IllegalArgumentException: requirement failed: Decimal precision 4 exceeds max precision 3
    ```
    
    **AFTER**
    ```scala
    scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").printSchema
    root
     |-- _c0: decimal(4,-9) (nullable = true)
    
    scala> spark.read.format("csv").option("inferSchema", true).load("decimal.csv").show
    +---------+
    |      _c0|
    +---------+
    |9.030E+12|
    | 1.19E+11|
    +---------+
    ```
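    
    For the record, `9.03E+12` parses as `decimal(3,-10)` and `1.19E+11` as 
`decimal(3,-9)`. The tightest common type must hold 13 integral digits at 
scale -9, i.e. `decimal(4,-9)`, which matches the schema above; before the 
fix only the last-seen `decimal(3,-9)` survived, while `9.03E+12` needs 
precision 4 at that scale, hence the "precision 4 exceeds max precision 3" 
failure.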
    
    
    ## How was this patch tested?
    
    Pass the newly added test case.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/dongjoon-hyun/spark SPARK-18877-BACKPORT-20

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16472.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #16472
    
----
commit bd7f0d5cf4afc7f349232932adb0a97c8a58a8db
Author: Dongjoon Hyun <[email protected]>
Date:   2017-01-03T15:06:50Z

    [SPARK-18877][SQL][BACKPORT-2.0] `CSVInferSchema.inferField` on DecimalType should find a common type with `typeSoFar`
    
----


