Github user squito commented on the issue:
https://github.com/apache/spark/pull/19250
> I think we can follow what Hive/Impala did for interoperability, i.e.
create a config to interpret parquet INT96 as timezone-agnostic timestamp in
parquet reader of Spark.
If I understand what you are asking correctly, I think this is what went
into the original PR:
https://github.com/apache/spark/pull/16781
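For illustration, a reader-side toggle of the kind being discussed might look like the sketch below. The config name `spark.sql.parquet.int96TimestampConversion` is the flag Spark later shipped for Impala-written Parquet files; whether it matches what the linked PR proposed is an assumption here, not a claim about that PR.

```scala
// Sketch only: a session-level flag telling Spark's Parquet reader how to
// interpret INT96 timestamps written by another engine. The config name below
// is the one Spark provides for Impala interoperability; the table-property
// mechanism discussed in this thread would be a separate, per-table signal.
spark.conf.set("spark.sql.parquet.int96TimestampConversion", "true")

// Reads after this point would treat INT96 values in Impala-written files
// as timezone-agnostic instead of adjusting them to the session time zone.
val df = spark.read.parquet("/path/to/impala_written_table")
```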
> However, I'm less sure about the parquet.timezone-adjustment table
property. Is this a standard published somewhere? Do Impala and Hive both
respect it? I think we need people from both Impala and Hive to say YES to this
proposal.
All three engines were going to make the change, until it was reverted from
Spark. Now the process as a whole is blocked on Spark -- if this change (or
the prior one) is accepted, then the other engines can move forward too.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]