This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 796f9cf  [SPARK-29311][SQL][FOLLOWUP] Add migration guide for extracting second from datetimes
796f9cf is described below

commit 796f9cf3fd24141cfc43328ab30b20f7bd659263
Author: Kent Yao <[email protected]>
AuthorDate: Tue Apr 7 07:09:45 2020 +0000

    [SPARK-29311][SQL][FOLLOWUP] Add migration guide for extracting second from datetimes
    
    ### What changes were proposed in this pull request?
    
    Add migration guide for extracting second from datetimes
    
    ### Why are the changes needed?
    
    Document the behavior change for the `EXTRACT` expression.
    
    ### Does this PR introduce any user-facing change?
    
    No

    ### How was this patch tested?
    
    N/A, just passing jenkins
    
    Closes #28140 from yaooqinn/SPARK-29311.
    
    Authored-by: Kent Yao <[email protected]>
    Signed-off-by: Wenchen Fan <[email protected]>
    (cherry picked from commit 3c94a7c8f548d7023c363720fbdd122980905ba8)
    Signed-off-by: Wenchen Fan <[email protected]>
---
 docs/sql-migration-guide.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/sql-migration-guide.md b/docs/sql-migration-guide.md
index ab35e1f..5a2b848 100644
--- a/docs/sql-migration-guide.md
+++ b/docs/sql-migration-guide.md
@@ -285,6 +285,8 @@ license: |
     - `now` - current query start time
   For example `SELECT timestamp 'tomorrow';`.
 
+  - Since Spark 3.0, when the `EXTRACT` expression is used to extract the second field from date/timestamp values, the result is a `DecimalType(8, 6)` value: 2 digits for the second part and 6 digits for the fractional part, with microsecond precision. For example, `extract(second from to_timestamp('2019-09-20 10:10:10.1'))` results in `10.100000`. In Spark version 2.4 and earlier, it returns an `IntegerType` value, and the result for the example above is `10`.
+
 ### Data Sources
 
   - In Spark version 2.4 and earlier, when reading a Hive Serde table with Spark native data sources(parquet/orc), Spark will infer the actual file schema and update the table schema in metastore. Since Spark 3.0, Spark doesn't infer the schema anymore. This should not cause any problems to end users, but if it does, please set `spark.sql.hive.caseSensitiveInferenceMode` to `INFER_AND_SAVE`.
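For illustration, the behavior difference described in the migration note above can be sketched in plain Python. This is only a model of the documented semantics, not Spark code; the function names are hypothetical, and `Decimal` stands in for Spark's `DecimalType(8, 6)`:

```python
from datetime import datetime
from decimal import Decimal

def extract_second_spark3(ts: str) -> Decimal:
    # Model of Spark 3.0 EXTRACT(second ...): integral seconds plus the
    # microsecond fraction, carried to 6 decimal places.
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
    return Decimal(dt.second) + Decimal(dt.microsecond).scaleb(-6)

def extract_second_spark24(ts: str) -> int:
    # Model of Spark 2.4 behavior: an IntegerType value, fraction dropped.
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
    return dt.second

print(extract_second_spark3("2019-09-20 10:10:10.1"))   # -> 10.100000
print(extract_second_spark24("2019-09-20 10:10:10.1"))  # -> 10
```

The same input thus yields `10.100000` under the Spark 3.0 rule and `10` under the Spark 2.4 rule, which is the behavior change the added guide entry documents.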


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
