GitHub user ueshin commented on the issue:
https://github.com/apache/spark/pull/19607
The remaining discussion points in this PR are:
- [ ] Do we need the config
`"spark.sql.execution.pandas.respectSessionTimeZone"`?
  - @cloud-fan suggested that we don't need the config (a usage sketch follows this list).
    https://github.com/apache/spark/pull/19607#discussion_r149655172
- [ ] What version of Pandas should we support?
  - Old Pandas versions need a lot of workarounds and don't handle timestamp
values properly, so we need to raise the minimum supported Pandas version.
  - @wesm suggested that we should support only `0.19.2` or later (see the version-check sketch at the end of this comment).
    https://github.com/apache/spark/pull/19607#issuecomment-342371522
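
To make the first point concrete, here is a rough usage sketch, assuming the semantics proposed in this PR (session-time-zone-aware `toPandas()` when the flag is on, legacy behavior when it is off); this is illustrative, not code from the PR itself:

```python
# Minimal sketch (assumptions, not code from this PR): shows where the
# "spark.sql.execution.pandas.respectSessionTimeZone" flag would come into
# play when converting a DataFrame with timestamps to Pandas.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tz-sketch").getOrCreate()

# The session time zone that the new behavior would respect.
spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")

# Proposed flag: when true, toPandas() would render timestamps in the session
# time zone; when false, it would keep the old (system local time) behavior.
spark.conf.set("spark.sql.execution.pandas.respectSessionTimeZone", "true")

df = spark.sql("SELECT timestamp '2017-11-01 00:00:00' AS ts")
print(df.toPandas()["ts"])
```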
I'd really appreciate it if you could leave comments on these, and please let
me know if I missed something.
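
For the second point, here is a small sketch of the kind of guard PySpark could use to enforce the proposed minimum Pandas version; the helper name and error message are hypothetical, not necessarily what would land in the PR:

```python
# Illustrative version guard for the proposed minimum Pandas version (0.19.2).
from distutils.version import LooseVersion


def require_minimum_pandas_version(minimum="0.19.2"):
    """Raise if the installed Pandas is older than the supported minimum."""
    import pandas
    if LooseVersion(pandas.__version__) < LooseVersion(minimum):
        raise ImportError(
            "Pandas >= %s must be installed; however, your version is %s."
            % (minimum, pandas.__version__))
```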