Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16747
Then, it looks okay to me as a description of the current state. I checked it after building the docs with this change, and we can already use the type as below:
```scala
scala> sql("SELECT interval 1 second").schema(0).dataType.getClass
res0: Class[_ <: org.apache.spark.sql.types.DataType] = class org.apache.spark.sql.types.CalendarIntervalType$

scala> sql("SELECT interval 1 second").collect()(0).get(0).getClass
res1: Class[_] = class org.apache.spark.unsafe.types.CalendarInterval
```
```scala
scala> import org.apache.spark.sql.Row
scala> import org.apache.spark.sql.types.{CalendarIntervalType, StructField, StructType}
scala> import org.apache.spark.unsafe.types.CalendarInterval

scala> val rdd = spark.sparkContext.parallelize(Seq(Row(new CalendarInterval(0, 0))))
rdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = ParallelCollectionRDD[0] at parallelize at <console>:32

scala> spark.createDataFrame(rdd, StructType(StructField("a", CalendarIntervalType) :: Nil))
res1: org.apache.spark.sql.DataFrame = [a: calendarinterval]
```
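For completeness, collecting from the DataFrame created above should hand back the same unsafe type, matching the first snippet (a minimal sketch continuing that session; `res1` is the DataFrame returned by `createDataFrame`):

```scala
// Sketch: collect the single row and inspect the runtime class of column "a".
// Expected (not verified here): org.apache.spark.unsafe.types.CalendarInterval
res1.collect()(0).get(0).getClass
```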
Another meta concern: `org.apache.spark.unsafe.types.CalendarInterval` seems undocumented in both scaladoc and javadoc (the entire `unsafe` module is). Once we document this as even a weak promise for this API, we might have to keep it for backward compatibility.
Maybe we should just describe it as a SQL-dedicated type, or as unsupported for now, with a `Note:` rather than documenting it fully?
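If we go that route, the note could read something like this (just a hypothetical wording, not a final proposal):

```
Note: `CalendarIntervalType` is currently intended for SQL use only; the backing
class `org.apache.spark.unsafe.types.CalendarInterval` is internal and not a
supported external type.
```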