Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r82066767
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -212,6 +212,14 @@ object SQLConf {
.booleanConf
.createWithDefault(true)
+ val PARQUET_INT64_AS_TIMESTAMP_MILLIS =
--- End diff ---
@davies Thanks Davies. I have a few questions.
1) If we externalized a config in prior releases, can we just change it, or
do we need to keep it backward compatible?
2) I was reading the description and usage of the existing config
'spark.sql.parquet.int96AsTimestamp'; it seems that it applies to the read
path, whereas the new one introduced in this PR applies to the write path.
3) Should we change the semantics of the proposed common property to control
only the write encoding, and base reading solely on the schema metadata, i.e.
type + original type?
Did you want that change as part of this PR? Thanks a lot for your input,
as always.
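For context, a minimal sketch of the read/write split described in (2), set via the standard `spark.conf.set` API. The read-side key is the existing documented config; the write-side key is an assumption inferred from the `PARQUET_INT64_AS_TIMESTAMP_MILLIS` val name in the diff above (the actual key string is not shown there) and may differ in the final PR:

```scala
// Read side (existing config): interpret Parquet INT96 values as
// TimestampType when reading. This key exists in released Spark versions.
spark.conf.set("spark.sql.parquet.int96AsTimestamp", "true")

// Write side (proposed in this PR): key name inferred from the val name
// PARQUET_INT64_AS_TIMESTAMP_MILLIS -- an assumption, not confirmed above.
spark.conf.set("spark.sql.parquet.int64AsTimestampMillis", "true")

// With the write-side flag enabled, TimestampType columns would be written
// as Parquet INT64 (original type TIMESTAMP_MILLIS) rather than INT96.
df.write.parquet("/tmp/ts_millis")
```

This illustrates why question (3) arises: if readers decide purely from the schema metadata (type + original type), only the writer needs a flag.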