Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/7226#discussion_r34112187
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
---
@@ -301,6 +302,16 @@ object CatalystTypeConverters {
DateTimeUtils.toJavaTimestamp(row.getLong(column))
}
+ private object IntervalConverter extends CatalystTypeConverter[Interval, Interval, Array[Byte]] {
+ override def toCatalystImpl(scalaValue: Interval): Array[Byte] =
--- End diff ---
But if we use `Interval` as the internal type, maybe we should just use an int
and a long rather than 12 bytes? It looks to me like keeping an object holding an int + a long
versus 12 bytes makes no difference, since the object itself already carries a lot of overhead.
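For reference, the two representations under discussion amount to the same 12 bytes of payload. A minimal sketch of the round trip, assuming a hypothetical stand-in `Interval` with an int `months` field and a long `microseconds` field (the names here are illustrative, not Spark's actual API):

```scala
import java.nio.ByteBuffer

// Hypothetical stand-in for the Interval type being discussed:
// an int (months) plus a long (microseconds), i.e. 12 bytes of payload.
final case class Interval(months: Int, microseconds: Long)

object IntervalCodec {
  // Pack the two fields into exactly 12 bytes, as the diff's
  // Array[Byte] representation would.
  def toBytes(i: Interval): Array[Byte] = {
    val buf = ByteBuffer.allocate(12)
    buf.putInt(i.months)
    buf.putLong(i.microseconds)
    buf.array()
  }

  // Decode 12 bytes back into the int + long pair.
  def fromBytes(bytes: Array[Byte]): Interval = {
    val buf = ByteBuffer.wrap(bytes)
    Interval(buf.getInt(), buf.getLong())
  }
}
```

Either way the payload is identical; the object form just adds JVM header overhead on top of the same int + long.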