Ngone51 commented on a change in pull request #26595: [SPARK-29956][SQL] A 
literal number with an exponent should be parsed to Double
URL: https://github.com/apache/spark/pull/26595#discussion_r351102947
 
 

 ##########
 File path: docs/sql-migration-guide.md
 ##########
 @@ -225,6 +225,8 @@ license: |
  - Since Spark 3.0, when casting string values to integral types, including tinyint, smallint, int and bigint, the leading and trailing white spaces (<= ASCII 32) will be trimmed before converting to integral values, e.g. `cast(' 1 ' as int)` results in `1`. In Spark version 2.4 and earlier, the result is `null`.
 
  - Since Spark 3.0, when casting string values to date, timestamp and interval values, the leading and trailing white spaces (<= ASCII 32) will be trimmed before casting, e.g. `cast('2019-10-10\t' as date)` results in the date value `2019-10-10`. In Spark version 2.4 and earlier, only the trailing space is removed, thus the result is `null`.
+  
+  - Since Spark 3.0, numbers written in scientific notation (e.g. `1E2`) are parsed as Double. In Spark version 2.4 and earlier, they are parsed as Decimal. To restore the behavior before Spark 3.0, you can set `spark.sql.legacy.exponentLiteralAsDecimal.enabled` to `true`.
 
 Review comment:
  Wait, wait. `numeric` represents an **exact** numeric/number in pg sql, but we want an approximate numeric/number here, since we parse it as Double. So, to avoid confusing users with these concepts, I'd prefer the original way. WDYT? @maropu
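
  To make the exact-vs-approximate distinction concrete, here is a minimal JVM sketch (plain Java, no Spark required; the class name `ExponentLiteralDemo` is hypothetical). `BigDecimal` stands in for Spark's exact Decimal type (the 2.4 behavior), while `double` stands in for the approximate Double type (the 3.0 behavior):

  ```java
  import java.math.BigDecimal;

  public class ExponentLiteralDemo {
      public static void main(String[] args) {
          String literal = "1E2";
          // Spark 2.4 and earlier: the literal is parsed as an exact Decimal
          BigDecimal asDecimal = new BigDecimal(literal);
          // Spark 3.0: the literal is parsed as an approximate Double
          double asDouble = Double.parseDouble(literal);
          System.out.println("Decimal: " + asDecimal + ", Double: " + asDouble);
      }
  }
  ```

  Both parses represent the value 100 here, but the types differ: the Decimal carries exact unscaled-value/scale semantics, while the Double is a binary floating-point approximation, which is why conflating the two under the `numeric` name would mislead users.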

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

