Github user deanchen commented on the issue:
https://github.com/apache/spark/pull/13912
@srowen @rxin Would love to see this get merged, as this has been a pain
point for us. As an engineer I'm not a fan of timezoneless dates, but passing
through or writing timezoneless dates to CSVs has been a necessity for a
variety of reasons in the past.
We use Spark to parse large amounts of daily financial asset data with
pre-agreed date conventions. Most of the dates we deal with don't belong to a
timezone and are treated more like indices with a maximum granularity of one
day. As a result, the CSV reports we produce for our customers never contain a
timezone, and almost all dates are passed through from the original
timezoneless values.
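For reference, a rough sketch of the kind of passthrough workaround I mean (column names and file paths are made up; the idea is to declare the date column as a plain string so Spark never converts it to an internal timestamp and never applies a timezone):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}

object TimezonelessCsvPassthrough {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TimezonelessCsv").getOrCreate()

    // Keep the date column as StringType so it is treated as an opaque
    // day-level index rather than a timezone-aware timestamp.
    val schema = StructType(Seq(
      StructField("as_of_date", StringType, nullable = false), // e.g. "2016-06-24"
      StructField("price", DoubleType, nullable = true)
    ))

    val df = spark.read
      .option("header", "true")
      .schema(schema)
      .csv("daily_prices.csv")

    // The date column passes through untouched and is written back verbatim.
    df.write
      .option("header", "true")
      .csv("daily_prices_report")

    spark.stop()
  }
}
```

This works, but it means giving up date semantics entirely, which is why a first-class option in the CSV datasource would be much nicer.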
This is not just a common pattern in financial data, but also common
practice at large companies I've worked at. One large tech company used the
convention of MST dates and datetimes for all internal systems (databases
included).