AngersZhuuuu opened a new pull request #34519:
URL: https://github.com/apache/spark/pull/34519


   ### What changes were proposed in this pull request?
   For this case:
   ```
   withTempDir { dir =>
     withSQLConf(HiveUtils.CONVERT_METASTORE_PARQUET.key -> "false") {
       withTable("test_precision") {
         val df = sql("SELECT 'dummy' AS name, 1000000000000000000010.7000000000000010 AS value")
         df.write.mode("Overwrite").parquet(dir.getAbsolutePath)
         sql(
           s"""
              |CREATE EXTERNAL TABLE test_precision(name STRING, value DECIMAL(18,6))
              |STORED AS PARQUET LOCATION '${dir.getAbsolutePath}'
              |""".stripMargin)
         checkAnswer(sql("SELECT * FROM test_precision"), Row("dummy", null))
       }
     }
   }
   ```
   
   The DataFrame we write has the schema
   ```
   root
    |-- name: string (nullable = false)
    |-- value: decimal(38,16) (nullable = false)
   ```
   but the table is created with the narrower schema
   ```
   root
    |-- name: string (nullable = false)
    |-- value: decimal(18,6) (nullable = false)
   ```
   
   Because the stored value does not fit into DECIMAL(18,6), `enforcePrecisionScale` returns `null`:
   ```
     public HiveDecimal getPrimitiveJavaObject(Object o) {
       return o == null ? null : this.enforcePrecisionScale(((HiveDecimalWritable) o).getHiveDecimal());
     }
   ```
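   To see why (a minimal standalone sketch, not code from this PR, assuming the Hive dependency that provides `HiveDecimal` is on the classpath): `HiveDecimal.enforcePrecisionScale` returns `null` whenever a value cannot be represented at the requested precision and scale, and the value above needs 22 integer digits while `DECIMAL(18,6)` leaves room for only 18 - 6 = 12.
   ```
   import org.apache.hadoop.hive.common.type.HiveDecimal

   object EnforcePrecisionScaleDemo {
     def main(args: Array[String]): Unit = {
       // The literal written by the test, parsed as a Hive decimal.
       val dec = HiveDecimal.create("1000000000000000000010.7000000000000010")

       // Narrow it to DECIMAL(18,6): 22 integer digits cannot fit into
       // 18 - 6 = 12, so Hive returns null rather than a truncated value.
       val narrowed = HiveDecimal.enforcePrecisionScale(dec, 18, 6)
       println(narrowed) // prints "null"
     }
   }
   ```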
   An NPE is then thrown when `toCatalystDecimal` is called on that `null` result.
   
   We should check whether the returned value is `null` to avoid throwing the NPE.
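   For illustration, here is a minimal sketch of such a null check (an assumption about the shape of the fix, not the exact patch; the real change would live around Spark's `toCatalystDecimal` helper in `HiveInspectors`):
   ```
   import org.apache.hadoop.hive.serde2.objectinspector.primitive.HiveDecimalObjectInspector
   import org.apache.spark.sql.types.Decimal

   object DecimalUnwrapSketch {
     // Convert a Hive decimal to a Catalyst Decimal, returning null instead of
     // dereferencing the null that enforcePrecisionScale produces for values
     // that do not fit the declared precision/scale.
     def toCatalystDecimal(hdoi: HiveDecimalObjectInspector, data: Any): Decimal = {
       if (hdoi.preferWritable()) {
         val writable = hdoi.getPrimitiveWritableObject(data)
         if (writable == null) null
         else Decimal(writable.getHiveDecimal().bigDecimalValue(), hdoi.precision(), hdoi.scale())
       } else {
         val javaDec = hdoi.getPrimitiveJavaObject(data)
         if (javaDec == null) null
         else Decimal(javaDec.bigDecimalValue(), hdoi.precision(), hdoi.scale())
       }
     }
   }
   ```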
   
   ### Why are the changes needed?
   Fix a bug: reading a Hive table whose declared decimal precision/scale cannot hold the stored value threw an NPE instead of returning `NULL`.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   ### How was this patch tested?
   Added UT

