Github user ueshin commented on the issue:

    https://github.com/apache/spark/pull/18664
  
    In my opinion, we should definitely specify the timezone to keep the
timestamp correct.
    I'm not sure yet which one is most suitable, but the candidates would be:
    
    1. `"UTC"`
    Spark SQL has timestamp value as the number of micros since `1970-01-01 
00:00:00.0 UTC` internally.
    2. `SQLConf.SESSION_LOCAL_TIMEZONE`
    Spark SQL uses this timezone when representing timestamps and performing
timezone-related operations. If the config value is not set, it falls back
to `DateTimeUtils.defaultTimeZone()`.
    3. `DateTimeUtils.defaultTimeZone()`
    The system timezone.
    
    Ideally we would also specify the timezone when
`spark.conf.set("spark.sql.execution.arrow.enable", "false")`, but wouldn't
that affect backward compatibility?
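    To illustrate why the choice of timezone matters, here is a minimal
sketch using pandas alone (no Spark): the same internal micros-since-epoch
value renders differently depending on which candidate timezone is applied.
The specific micros value and the `America/Los_Angeles` zone are arbitrary
examples, not values from the PR.

```python
import pandas as pd

# Spark SQL internally stores a timestamp as the number of microseconds
# since 1970-01-01 00:00:00.0 UTC.
micros = 1_500_000_000_000_000  # arbitrary example value

# Candidate 1: interpret the value in UTC.
ts_utc = pd.Timestamp(micros, unit="us", tz="UTC")

# Candidates 2/3: render the same instant in a session-local or system
# timezone, e.g. "America/Los_Angeles" (hypothetical example zone).
ts_local = ts_utc.tz_convert("America/Los_Angeles")

print(ts_utc)    # 2017-07-14 02:40:00+00:00
print(ts_local)  # 2017-07-13 19:40:00-07:00
```

    Both values denote the same instant, but the wall-clock rendering
(and hence what a user sees after `toPandas()`) differs, which is why the
conversion must pick one timezone explicitly rather than rely on whatever
the local system happens to use.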
