MaxGekk commented on issue #26134: [SPARK-29486][SQL] CalendarInterval should have 3 fields: months, days and microseconds
URL: https://github.com/apache/spark/pull/26134#issuecomment-543298160
 
 
   > I'm really on the fence. @MaxGekk do you have some better ideas?
   
   I would strictly follow the SQL standard and implement SQL `TIMESTAMP WITHOUT TZ` and `TIMESTAMP WITH TZ` (with a TZ_HOUR:TZ_MINUTE offset). This could be done when `spark.sql.ansi.enabled = true`, or independently of that setting.
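   
   To make the TZ_HOUR:TZ_MINUTE part concrete, here is a minimal Scala sketch (plain `java.time`, just an illustration, not Spark code): the SQL standard's time zone is a fixed hour:minute offset attached to the value, while `TIMESTAMP WITHOUT TZ` is only a wall-clock value.
   
   ```scala
   import java.time.{LocalDateTime, ZonedDateTime, ZoneOffset}
   
   // TIMESTAMP WITHOUT TZ: just a wall-clock value, no offset attached.
   val withoutTz = LocalDateTime.parse("2019-10-17T10:30:00")
   
   // TIMESTAMP WITH TZ: the same wall clock plus a fixed TZ_HOUR:TZ_MINUTE offset (+05:30 here).
   val withTz = ZonedDateTime.of(withoutTz, ZoneOffset.ofHoursMinutes(5, 30))
   
   println(withoutTz) // 2019-10-17T10:30
   println(withTz)    // 2019-10-17T10:30+05:30
   ```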
   
   With our current default setting, `spark.sql.ansi.enabled = false`, I would **not** change the session local time zone, because we would definitely break existing user apps. Changing the structure of the `INTERVAL` type looks reasonable.
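   
   A minimal sketch of what the three-field layout from this PR could look like (illustrative Scala only, not the actual `CalendarInterval` implementation):
   
   ```scala
   // Months and days are kept as calendar units; microseconds holds the sub-day part.
   // Keeping days separate from microseconds lets "1 day" stay one calendar day across DST changes.
   case class IntervalSketch(months: Int, days: Int, microseconds: Long) {
     override def toString: String =
       s"interval $months months $days days $microseconds microseconds"
   }
   
   // For example, "1 month 2 days 3 hours":
   val iv = IntervalSketch(1, 2, 3L * 60 * 60 * 1000 * 1000)
   ```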
   
   > To be honest I'd like to make Spark follow SQL standard and use TIMESTAMP 
WITHOUT TIMEZONE, but it changes the semantic and we don't know how to deal 
with existing timestamp data.
   
   Spark's `TIMESTAMP` and SQL `TIMESTAMP WITHOUT TIMEZONE` are different types, no doubt. I don't think we can smoothly replace one with the other. I think of the types as analogous to these Java 8 classes:
   - Spark's `TIMESTAMP` - `java.time.Instant`
   - SQL `TIMESTAMP WITHOUT TZ` - `java.time.LocalDateTime`
   - SQL `TIMESTAMP WITH TZ` - `java.time.ZonedDateTime`
   
   Just as the Java 8 time API has all 3 types, Spark could have all 3.
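   
   A quick illustration of that mapping (plain `java.time`, assuming the analogy above):
   
   ```scala
   import java.time.{Instant, LocalDateTime, ZonedDateTime, ZoneId}
   
   // Instant: an absolute point on the time line, like Spark's current TIMESTAMP.
   val instant = Instant.parse("2019-10-17T10:00:00Z")
   
   // LocalDateTime: a wall-clock value with no zone, like TIMESTAMP WITHOUT TZ.
   val local = LocalDateTime.of(2019, 10, 17, 10, 0)
   
   // ZonedDateTime: the wall clock plus a zone baked into the value, like TIMESTAMP WITH TZ.
   val zoned = ZonedDateTime.of(local, ZoneId.of("Europe/Amsterdam"))
   
   println(instant) // 2019-10-17T10:00:00Z
   println(local)   // 2019-10-17T10:00
   println(zoned)   // 2019-10-17T10:00+02:00[Europe/Amsterdam]
   ```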
