GitHub user davies commented on the pull request:
https://github.com/apache/spark/pull/7950#issuecomment-130484600
@angelini Thanks for looking into this.
For the first part, I didn't quite follow (sorry for my poor English).
What is the conclusion, or what should we do?
For the erased timezone, there is nothing we can do about it in Python, because
Spark SQL has no way to store a timezone: for efficiency, TimestampType is stored
as a Long of microseconds since the epoch in UTC. The test suite just follows this
behavior; it is not a typo.
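To make that concrete, here is a minimal sketch (not from this PR, and written against the newer SparkSession API rather than the SQLContext of that era) showing that a timezone-aware datetime comes back naive after a round trip through TimestampType; the instant is kept, but the original offset is lost:

```python
# Minimal sketch, not code from this PR: demonstrates that TimestampType
# stores only an instant (microseconds in UTC), so tzinfo is erased.
from datetime import datetime, timedelta, timezone

from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StructType, TimestampType

spark = SparkSession.builder.master("local[1]").appName("tz-demo").getOrCreate()

# A timezone-aware datetime at UTC+2.
aware = datetime(2015, 8, 12, 10, 0, 0, tzinfo=timezone(timedelta(hours=2)))

schema = StructType([StructField("ts", TimestampType(), False)])
df = spark.createDataFrame([(aware,)], schema)

# The value is stored internally as microseconds in UTC, so the +02:00
# offset cannot survive the round trip; collect() returns a naive datetime
# rendered in the session's local time.
result = df.collect()[0].ts
print(result, result.tzinfo)  # tzinfo is None
```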