HyukjinKwon commented on a change in pull request #23597: [SPARK-26653][SQL] 
Use Proleptic Gregorian calendar in parsing JDBC lower/upper bounds
URL: https://github.com/apache/spark/pull/23597#discussion_r249815144
 
 

 ##########
 File path: docs/sql-migration-guide-upgrade.md
 ##########
 @@ -43,6 +43,8 @@ displayTitle: Spark SQL Upgrading Guide
 
  - In Spark version 2.4 and earlier, if 
`org.apache.spark.sql.functions.udf(Any, DataType)` gets a Scala closure with 
a primitive-type argument, the returned UDF returns null if the input value 
is null. Since Spark 3.0, the UDF returns the default value of the Java 
type if the input value is null. For example, given `val f = udf((x: Int) => x, 
IntegerType)`, `f($"x")` returns null in Spark 2.4 and earlier if column 
`x` is null, and returns 0 in Spark 3.0. This behavior change is introduced 
because Spark 3.0 is built with Scala 2.12 by default.
 
+  - Since Spark 3.0, the JDBC options `lowerBound` and `upperBound` are 
converted to TimestampType/DateType values in the same way as casting strings 
to TimestampType/DateType values. The conversion is based on the Proleptic 
Gregorian calendar and the time zone defined by the SQL config 
`spark.sql.session.timeZone`. In Spark version 2.4 and earlier, the conversion 
is based on the hybrid calendar (Julian + Gregorian) and on the default system 
time zone.
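
A minimal sketch of where these options apply; the connection URL, table, and column names below are placeholders, and `spark` is assumed to be an existing `SparkSession`:

```scala
// Hypothetical JDBC read partitioned on a timestamp column.
// The url, dbtable, and partitionColumn values are placeholders.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://localhost/db")  // placeholder URL
  .option("dbtable", "events")                      // placeholder table
  .option("partitionColumn", "event_time")          // a TimestampType column
  .option("numPartitions", "4")
  // Since Spark 3.0, these strings are parsed like a cast to TimestampType:
  // Proleptic Gregorian calendar, time zone from spark.sql.session.timeZone.
  // In 2.4 and earlier: hybrid calendar and the default system time zone.
  .option("lowerBound", "2019-01-01 00:00:00")
  .option("upperBound", "2019-12-31 23:59:59")
  .load()
```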
 
 Review comment:
   Yea, sounds good.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
