GitHub user airhorns commented on the pull request:

    https://github.com/apache/spark/pull/6250#issuecomment-103507294
  
    I think the ideal case would be supporting timezone-aware objects inside
    SparkSQL, but I understand that that is expensive and challenging. See
    https://my.vertica.com/docs/7.1.x/HTML/Content/Authoring/SQLReferenceManual/DataTypes/Date-Time/TIMESTAMP.htm
    for a good description of how Vertica handles timestamps with zones: it
    stores them internally as UTC, then converts back to the timezone
    specified in the schema (if there is one) when the query returns. Even if
    Spark doesn't store and re-convert to the timezone declared in the schema,
    can we at least establish a rule that everything is stored internally as
    UTC, or some other consistent and unambiguous representation? That way,
    users know what to expect from SparkSQL, and can preserve local timezone
    information themselves if they care.
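
    To make that contract concrete, here is a minimal Scala sketch of the
    Vertica-style semantics described above. `TimestampColumn` and its
    methods are hypothetical names for illustration only, not existing Spark
    APIs: values are normalized to UTC microseconds on write, and the
    schema's declared zone (if any) is reattached only when results are
    returned.

        import java.time.{Instant, ZoneId, ZonedDateTime}

        // Hypothetical column wrapper holding an optional schema-declared zone.
        final case class TimestampColumn(declaredZone: Option[ZoneId]) {

          // On write: normalize any zoned value to an unambiguous UTC instant,
          // stored as microseconds since the epoch.
          def store(value: ZonedDateTime): Long =
            value.toInstant.toEpochMilli * 1000L

          // On read: reattach the schema's zone if one was declared; fall back
          // to UTC rather than the session-local zone to avoid ambiguity.
          def read(micros: Long): ZonedDateTime =
            Instant.ofEpochMilli(micros / 1000L)
              .atZone(declaredZone.getOrElse(ZoneId.of("UTC")))
        }

        val col    = TimestampColumn(Some(ZoneId.of("America/New_York")))
        val micros = col.store(
          ZonedDateTime.parse("2015-05-19T10:00:00-04:00[America/New_York]"))
        col.read(micros)  // same instant, rendered back in America/New_York

    The key point is that the stored Long is always the same instant no
    matter where the query runs; only the rendering at return time varies.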
    
    Also, what happens to timezone-aware Calendar objects (or the like) on
    the Java/Scala side? Are they converted to local time as well?
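
    For concreteness, a minimal sketch of the round trip I'm asking about,
    assuming the usual java.sql.Timestamp path into Spark SQL:

        import java.sql.Timestamp
        import java.util.{Calendar, TimeZone}

        // A Calendar carries an explicit zone, but java.sql.Timestamp keeps
        // only the instant; the zone is dropped on conversion.
        val cal = Calendar.getInstance(TimeZone.getTimeZone("Asia/Tokyo"))
        val ts  = new Timestamp(cal.getTimeInMillis)
        println(ts)  // renders in the JVM's default timezone, not Asia/Tokyo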

