cloud-fan commented on a change in pull request #24041: [SPARK-27119][SQL] Do not infer schema when reading Hive serde table with native data source
URL: https://github.com/apache/spark/pull/24041#discussion_r264093913
 
 

 ##########
 File path: docs/sql-migration-guide-upgrade.md
 ##########
 @@ -89,18 +89,20 @@ displayTitle: Spark SQL Upgrading Guide
 
  - Since Spark 3.0, the Proleptic Gregorian calendar is used in parsing, formatting, and converting dates and timestamps, as well as in extracting sub-components such as years and days. Spark 3.0 uses Java 8 API classes from the java.time packages that are based on ISO chronology (https://docs.oracle.com/javase/8/docs/api/java/time/chrono/IsoChronology.html). In Spark version 2.4 and earlier, those operations are performed using the hybrid calendar (Julian + Gregorian, see https://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html). The changes impact the results for dates before October 15, 1582 (Gregorian) and affect the following Spark 3.0 APIs:
 
-    - CSV/JSON datasources use java.time API for parsing and generating CSV/JSON content. In Spark version 2.4 and earlier, java.text.SimpleDateFormat is used for the same purpose with fallbacks to the parsing mechanisms of Spark 2.0 and 1.x. For example, `2018-12-08 10:39:21.123` with the pattern `yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0 because the timestamp does not match the pattern, but it can be parsed by earlier Spark versions due to a fallback to `Timestamp.valueOf`. To parse the same timestamp since Spark 3.0, the pattern should be `yyyy-MM-dd HH:mm:ss.SSS`.
+  - CSV/JSON datasources use java.time API for parsing and generating CSV/JSON content. In Spark version 2.4 and earlier, java.text.SimpleDateFormat is used for the same purpose with fallbacks to the parsing mechanisms of Spark 2.0 and 1.x. For example, `2018-12-08 10:39:21.123` with the pattern `yyyy-MM-dd'T'HH:mm:ss.SSS` cannot be parsed since Spark 3.0 because the timestamp does not match the pattern, but it can be parsed by earlier Spark versions due to a fallback to `Timestamp.valueOf`. To parse the same timestamp since Spark 3.0, the pattern should be `yyyy-MM-dd HH:mm:ss.SSS`.
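 As a side note, the strict pattern-matching behavior the quoted text describes can be illustrated with plain java.time, independent of Spark (the class name `PatternDemo` is just for illustration):

 ```java
 import java.time.LocalDateTime;
 import java.time.format.DateTimeFormatter;
 import java.time.format.DateTimeParseException;

 public class PatternDemo {
     public static void main(String[] args) {
         String ts = "2018-12-08 10:39:21.123";

         // A literal 'T' in the pattern does not match the space in the input,
         // so java.time refuses to parse it (no SimpleDateFormat-style fallback).
         try {
             LocalDateTime.parse(ts, DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSS"));
             System.out.println("parsed with 'T' pattern");
         } catch (DateTimeParseException e) {
             System.out.println("failed with 'T' pattern");
         }

         // The pattern that mirrors the input parses successfully.
         LocalDateTime parsed =
             LocalDateTime.parse(ts, DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"));
         System.out.println(parsed); // 2018-12-08T10:39:21.123
     }
 }
 ```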
 
 Review comment:
   just fix the indentation

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
