This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 8b711115b614 [SPARK-55440][SQL] Types Framework - Phase 1a - Core Type 
System Foundation
8b711115b614 is described below

commit 8b711115b6145387930ec4d455e144b28dd36768
Author: David Milicevic <[email protected]>
AuthorDate: Tue Mar 3 03:49:25 2026 +0800

    [SPARK-55440][SQL] Types Framework - Phase 1a - Core Type System Foundation
    
    ### What changes were proposed in this pull request?
    
    This PR introduces the foundation of the **Spark Types Framework** - a 
system for centralizing type-specific operations that are currently scattered 
across 50+ files using diverse patterns.
    
    **Framework interfaces** (4 files in `sql/api` and `sql/catalyst`):
    - `TypeOps` (catalyst) - mandatory server-side trait consolidating physical 
type representation, literal creation, and external type conversion
    - `TypeApiOps` (sql-api) - mandatory client-side trait consolidating string 
formatting and row encoding
    - `TimeTypeOps` + `TimeTypeApiOps` - proof-of-concept implementation for 
TimeType
    
    All mandatory operations for a type live in a single interface per module 
(`TypeOps` for catalyst, `TypeApiOps` for sql-api), so everything a new type 
must implement is captured in one place per module. Optional capabilities 
(e.g., proto serialization, Arrow SerDe, JDBC) will be defined in subsequent 
PRs as separate traits that can be mixed in incrementally.
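    
    As a simplified, Spark-free sketch (the names below are illustrative 
stand-ins, not the real interfaces), the one-mandatory-interface plus 
optional-mixin design looks like:
    
    ```scala
    // Illustrative sketch only - simplified stand-ins for the real traits.
    trait DataTypeLike { def name: String }
    case class SimpleTimeType(precision: Int) extends DataTypeLike {
      def name: String = s"time($precision)"
    }

    // Mandatory interface: everything a type must implement in this module.
    trait TypeOpsSketch {
      def dataType: DataTypeLike
      def format(v: Any): String
    }

    // Optional capability, mixed in only by types that support it.
    trait JdbcOpsSketch { def jdbcTypeCode: Int }

    // A concrete type implements the mandatory interface in one class.
    class TimeTypeOpsSketch(val t: SimpleTimeType) extends TypeOpsSketch {
      def dataType: DataTypeLike = t
      def format(v: Any): String = s"${v}ns"
    }
    ```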
    
    **Integration points** (9 existing files modified):
    - `PhysicalDataType.scala` - physical type dispatch
    - `CatalystTypeConverters.scala` - external/internal type conversion (via 
`TypeOpsConverter` adapter)
    - `ToStringBase.scala` - string formatting
    - `RowEncoder.scala` - row encoding
    - `literals.scala` - default literal creation (`Literal.default`)
    - `EncoderUtils.scala` - encoder Java class mapping
    - `CodeGenerator.scala` - codegen Java class mapping
    - `SpecificInternalRow.scala` - mutable value creation
    - `InternalRow.scala` - row writer dispatch
    
    **Feature flag**: `spark.sql.types.framework.enabled` (defaults to `true` 
in tests via `Utils.isTesting`, `false` otherwise), configured in 
`SQLConf.scala` + `SqlApiConf.scala`.
    
    **Factory design:** `TypeOps.apply(dt)` returns `Option[TypeOps]`, serving 
as both lookup and existence check. The feature flag is checked inside 
`apply()`, so callers don't need to check it separately. Integration points use 
`getOrElse` to fall through to legacy handling:
    
    ```scala
    def someOperation(dt: DataType) =
      TypeOps(dt).map(_.someMethod()).getOrElse {
        dt match {
          // Legacy types (unchanged)
          case DateType => ...
          case TimestampType => ...
        }
      }
    ```
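    
    A runnable, Spark-free sketch of this factory-plus-fallback pattern 
(simplified stand-in names, with a plain `var` standing in for the real conf 
flag):
    
    ```scala
    // Illustrative stand-ins for DataType and the Ops interface.
    sealed trait Dt
    case object DateDt extends Dt
    case object TimeDt extends Dt

    trait OpsSketch { def someMethod(): String }

    object OpsSketch {
      // Stand-in for spark.sql.types.framework.enabled.
      var frameworkEnabled = true

      // The feature flag is checked inside apply(), so callers never do.
      def apply(dt: Dt): Option[OpsSketch] =
        if (!frameworkEnabled) None
        else dt match {
          case TimeDt =>
            Some(new OpsSketch { def someMethod(): String = "framework-time" })
          case _ => None
        }
    }

    // Integration points fall through to legacy handling via getOrElse.
    def someOperation(dt: Dt): String =
      OpsSketch(dt).map(_.someMethod()).getOrElse {
        dt match { // legacy path, unchanged
          case DateDt => "legacy-date"
          case TimeDt => "legacy-time"
        }
      }
    ```
    
    With the flag on, TimeDt dispatches through the framework while DateDt 
keeps its legacy arm; with the flag off, both take the legacy path.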
    
    The split across `sql/api` and `sql/catalyst` follows existing Spark module 
separation - `TypeApiOps` lives in `sql/api` for client-side operations that 
depend on `AgnosticEncoder`, while `TypeOps` lives in `sql/catalyst` for 
server-side operations that depend on `InternalRow`, `PhysicalDataType`, etc.
    
    This is the first of several planned PRs. Subsequent PRs will add 
client-side integrations (Spark Connect proto, Arrow SerDe, JDBC, Python, 
Thrift) and storage format integrations (Parquet, ORC, CSV, JSON, etc.).
    
    ### Why are the changes needed?
    
    Adding a new data type to Spark currently requires modifying **50+ files** 
with scattered type-specific logic. Each file has its own conventions, and 
there is no compiler assistance to ensure completeness. Integration points are 
non-obvious and easy to miss - patterns include `_: TimeType` in Scala pattern 
matching, `TimeNanoVector` in Arrow SerDe, `.hasTime()`/`.getTime()` in proto 
fields, `LocalTimeEncoder` in encoder helpers, `java.sql.Types.TIME` in JDBC, 
`instanceof TimeType` in  [...]
    
    The framework centralizes type-specific infrastructure operations in Ops 
interface classes. When adding a new type with the framework in place, a 
developer creates two Ops classes (one in `sql/api`, one in `sql/catalyst`) and 
registers them in the corresponding factory objects. The compiler enforces that 
all required interface methods are implemented, significantly reducing the risk 
of missing integration points.
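    
    The compile-time guarantee can be sketched as follows (hypothetical names, 
simplified from the real interfaces):
    
    ```scala
    // Hypothetical mandatory interface - simplified from the real TypeOps.
    trait MandatoryOpsSketch {
      def format(v: Any): String
      def defaultValue: Any
      def javaClass: Class[_]
    }

    // A new type's Ops class compiles only if every abstract member is
    // implemented; deleting any method below turns a missed integration
    // point into a compile error rather than a silent runtime gap.
    class NewTypeOpsSketch extends MandatoryOpsSketch {
      def format(v: Any): String = v.toString
      def defaultValue: Any = 0L
      def javaClass: Class[_] = classOf[java.lang.Long]
    }
    ```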
    
    **Concrete example - TimeType:** TimeType has integration points spread 
across 50+ files using the diverse patterns listed above (physical type 
mapping, literals, type converters, encoders, formatters, Arrow SerDe, proto 
conversion, JDBC, Python, Thrift, storage formats). With the framework, these 
are consolidated into two Ops classes: `TimeTypeOps` (~80 lines) and 
`TimeTypeApiOps` (~60 lines). A developer adding a new type with similar 
complexity would create two analogous files inst [...]
    
    This PR covers only the core infrastructure integration; the client-side 
and storage format integrations listed above follow in subsequent PRs.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No. This is an internal refactoring behind a feature flag 
(`spark.sql.types.framework.enabled`). When the flag is enabled, 
framework-supported types use centralized Ops dispatch instead of direct 
pattern matching. Behavior is identical in both paths. The flag defaults to 
`true` in tests and `false` otherwise.
    
    ### How was this patch tested?
    
    The framework is a refactoring of existing dispatch logic - it changes the 
mechanism but preserves identical behavior. The feature flag is enabled by 
default in test environments (`Utils.isTesting`), so the entire existing test 
suite validates the framework code path. No new tests are added in this PR 
because the framework delegates to the same underlying logic that existing 
tests already cover.
    
    In subsequent phases, the testing focus will be on:
    1. Testing the framework itself (Ops interface contracts, roundtrip 
correctness, edge cases)
    2. Designing a generalized testing mechanism that enforces proper test 
coverage for each type added through the framework
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    Co-authored with: claude-opus-4-6
    
    Closes #54223 from davidm-db/davidm-db/types_framework.
    
    Authored-by: David Milicevic <[email protected]>
    Signed-off-by: Wenchen Fan <[email protected]>
---
 .../src/main/scala/org/apache/spark/sql/Row.scala  |  13 +-
 .../spark/sql/catalyst/encoders/RowEncoder.scala   |   8 +
 .../org/apache/spark/sql/internal/SqlApiConf.scala |   2 +
 .../spark/sql/types/ops/TimeTypeApiOps.scala       |  59 ++++++
 .../apache/spark/sql/types/ops/TypeApiOps.scala    | 126 ++++++++++++
 .../sql/catalyst/CatalystTypeConverters.scala      |  19 ++
 .../apache/spark/sql/catalyst/InternalRow.scala    |   8 +-
 .../spark/sql/catalyst/encoders/EncoderUtils.scala |   6 +-
 .../catalyst/expressions/SpecificInternalRow.scala |  12 +-
 .../sql/catalyst/expressions/ToStringBase.scala    |   8 +-
 .../expressions/codegen/CodeGenerator.scala        |   7 +-
 .../spark/sql/catalyst/expressions/literals.scala  |   6 +-
 .../sql/catalyst/types/PhysicalDataType.scala      |   6 +-
 .../spark/sql/catalyst/types/ops/TimeTypeOps.scala |  84 ++++++++
 .../spark/sql/catalyst/types/ops/TypeOps.scala     | 217 +++++++++++++++++++++
 .../org/apache/spark/sql/internal/SQLConf.scala    |  12 ++
 16 files changed, 579 insertions(+), 14 deletions(-)

diff --git a/sql/api/src/main/scala/org/apache/spark/sql/Row.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/Row.scala
index 9b765946561e..3a0e4d45f937 100644
--- a/sql/api/src/main/scala/org/apache/spark/sql/Row.scala
+++ b/sql/api/src/main/scala/org/apache/spark/sql/Row.scala
@@ -37,6 +37,7 @@ import org.apache.spark.sql.errors.DataTypeErrors
 import org.apache.spark.sql.errors.DataTypeErrors.{toSQLType, toSQLValue}
 import org.apache.spark.sql.internal.SqlApiConf
 import org.apache.spark.sql.types._
+import org.apache.spark.sql.types.ops.TypeApiOps
 import org.apache.spark.unsafe.types.CalendarInterval
 import org.apache.spark.util.ArrayImplicits._
 
@@ -627,8 +628,16 @@ trait Row extends Serializable {
     }
 
     // Convert a value to json.
-    def toJson(value: Any, dataType: DataType): JValue = (value, dataType) 
match {
-      case (null, _) => JNull
+    def toJson(value: Any, dataType: DataType): JValue =
+      if (value == null) {
+        JNull
+      } else {
+        TypeApiOps(dataType)
+          .map(ops => JString(ops.format(value)))
+          .getOrElse(toJsonDefault(value, dataType))
+      }
+
+    def toJsonDefault(value: Any, dataType: DataType): JValue = (value, 
dataType) match {
       case (b: Boolean, _) => JBool(b)
       case (b: Byte, _) => JLong(b)
       case (s: Short, _) => JLong(s)
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/encoders/RowEncoder.scala
 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/encoders/RowEncoder.scala
index 600b49536aec..bad673672188 100644
--- 
a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/encoders/RowEncoder.scala
+++ 
b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/encoders/RowEncoder.scala
@@ -25,6 +25,7 @@ import 
org.apache.spark.sql.catalyst.encoders.AgnosticEncoders.{BinaryEncoder, B
 import org.apache.spark.sql.errors.DataTypeErrorsBase
 import org.apache.spark.sql.internal.SqlApiConf
 import org.apache.spark.sql.types._
+import org.apache.spark.sql.types.ops.TypeApiOps
 import org.apache.spark.util.ArrayImplicits._
 
 /**
@@ -70,6 +71,13 @@ object RowEncoder extends DataTypeErrorsBase {
   }
 
   private[sql] def encoderForDataType(dataType: DataType, lenient: Boolean): 
AgnosticEncoder[_] =
+    TypeApiOps(dataType)
+      .map(_.getEncoder)
+      .getOrElse(encoderForDataTypeDefault(dataType, lenient))
+
+  private def encoderForDataTypeDefault(
+      dataType: DataType,
+      lenient: Boolean): AgnosticEncoder[_] =
     dataType match {
       case NullType => NullEncoder
       case BooleanType => BoxedBooleanEncoder
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/internal/SqlApiConf.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/internal/SqlApiConf.scala
index 2e2105c852e6..bedd4afe0ed5 100644
--- a/sql/api/src/main/scala/org/apache/spark/sql/internal/SqlApiConf.scala
+++ b/sql/api/src/main/scala/org/apache/spark/sql/internal/SqlApiConf.scala
@@ -53,6 +53,7 @@ private[sql] trait SqlApiConf {
   def parserDfaCacheFlushRatio: Double
   def legacyParameterSubstitutionConstantsOnly: Boolean
   def legacyIdentifierClauseOnly: Boolean
+  def typesFrameworkEnabled: Boolean
 }
 
 private[sql] object SqlApiConf {
@@ -110,4 +111,5 @@ private[sql] object DefaultSqlApiConf extends SqlApiConf {
   override def parserDfaCacheFlushRatio: Double = -1.0
   override def legacyParameterSubstitutionConstantsOnly: Boolean = false
   override def legacyIdentifierClauseOnly: Boolean = false
+  override def typesFrameworkEnabled: Boolean = false
 }
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/types/ops/TimeTypeApiOps.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/types/ops/TimeTypeApiOps.scala
new file mode 100644
index 000000000000..581ffffff2f9
--- /dev/null
+++ b/sql/api/src/main/scala/org/apache/spark/sql/types/ops/TimeTypeApiOps.scala
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.types.ops
+
+import org.apache.spark.sql.catalyst.encoders.AgnosticEncoder
+import org.apache.spark.sql.catalyst.encoders.AgnosticEncoders.LocalTimeEncoder
+import org.apache.spark.sql.catalyst.util.{FractionTimeFormatter, 
TimeFormatter}
+import org.apache.spark.sql.types.{DataType, TimeType}
+
+/**
+ * Client-side (spark-api) operations for TimeType.
+ *
+ * This class implements all TypeApiOps methods for the TIME data type:
+ *   - String formatting: uses FractionTimeFormatter for consistent output
+ *   - Row encoding: uses LocalTimeEncoder for java.time.LocalTime
+ *
+ * RELATIONSHIP TO TimeTypeOps: TimeTypeOps (in catalyst package) extends this 
class to inherit
+ * client-side operations while adding server-side operations (physical type, 
literals, etc.).
+ *
+ * @param t
+ *   The TimeType with precision information
+ * @since 4.2.0
+ */
+class TimeTypeApiOps(val t: TimeType) extends TypeApiOps {
+
+  override def dataType: DataType = t
+
+  // ==================== String Formatting ====================
+
+  @transient
+  private lazy val timeFormatter: TimeFormatter = new FractionTimeFormatter()
+
+  override def format(v: Any): String = {
+    timeFormatter.format(v.asInstanceOf[Long])
+  }
+
+  override def toSQLValue(v: Any): String = {
+    s"TIME '${format(v)}'"
+  }
+
+  // ==================== Row Encoding ====================
+
+  override def getEncoder: AgnosticEncoder[_] = LocalTimeEncoder
+}
diff --git 
a/sql/api/src/main/scala/org/apache/spark/sql/types/ops/TypeApiOps.scala 
b/sql/api/src/main/scala/org/apache/spark/sql/types/ops/TypeApiOps.scala
new file mode 100644
index 000000000000..f16e8fbc3b55
--- /dev/null
+++ b/sql/api/src/main/scala/org/apache/spark/sql/types/ops/TypeApiOps.scala
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.types.ops
+
+import org.apache.spark.sql.catalyst.encoders.AgnosticEncoder
+import org.apache.spark.sql.internal.SqlApiConf
+import org.apache.spark.sql.types.{DataType, TimeType}
+import org.apache.spark.unsafe.types.UTF8String
+
+/**
+ * Client-side (spark-api) type operations for the Types Framework.
+ *
+ * This trait consolidates all client-side operations that a data type must 
implement to be usable
+ * in the Spark SQL API layer. All methods are mandatory because a type cannot 
function correctly
+ * without string formatting (needed for CAST to STRING, EXPLAIN, SHOW) or 
encoding (needed for
+ * Dataset[T] operations).
+ *
+ * This single-interface design was chosen over separate 
FormatTypeOps/EncodeTypeOps traits to
+ * make it clear what a new type must implement - there is one mandatory 
interface, and it
+ * contains everything required. Optional capabilities (e.g., proto, Arrow, 
JDBC) are defined as
+ * separate traits that can be mixed in incrementally.
+ *
+ * RELATIONSHIP TO TypeOps:
+ *   - TypeOps (catalyst): Server-side operations - physical types, literals, 
conversions
+ *   - TypeApiOps (spark-api): Client-side operations - formatting, encoding
+ *
+ * The split exists because sql/api cannot depend on sql/catalyst. For 
TimeType, TimeTypeOps
+ * (catalyst) extends TimeTypeApiOps (sql-api) to inherit both sets of 
operations.
+ *
+ * @see
+ *   TimeTypeApiOps for reference implementation
+ * @since 4.2.0
+ */
+trait TypeApiOps extends Serializable {
+
+  /** The DataType this Ops instance handles. */
+  def dataType: DataType
+
+  // ==================== String Formatting ====================
+
+  /**
+   * Formats an internal value as a display string.
+   *
+   * Used by CAST to STRING, EXPLAIN output, SHOW commands.
+   *
+   * @param v
+   *   the internal value (e.g., Long nanoseconds for TimeType)
+   * @return
+   *   formatted string (e.g., "10:30:45.123456")
+   */
+  def format(v: Any): String
+
+  /**
+   * Formats an internal value as a UTF8String.
+   *
+   * Default implementation wraps format(). Override for performance if needed.
+   */
+  def formatUTF8(v: Any): UTF8String = UTF8String.fromString(format(v))
+
+  /**
+   * Formats an internal value as a SQL literal string.
+   *
+   * @param v
+   *   the internal value
+   * @return
+   *   SQL literal string (e.g., "TIME '10:30:00'")
+   */
+  def toSQLValue(v: Any): String
+
+  // ==================== Row Encoding ====================
+
+  /**
+   * Returns the AgnosticEncoder for this type.
+   *
+   * Used by RowEncoder for Dataset[T] operations.
+   *
+   * @return
+   *   AgnosticEncoder instance (e.g., LocalTimeEncoder for TimeType)
+   */
+  def getEncoder: AgnosticEncoder[_]
+}
+
+/**
+ * Factory object for creating TypeApiOps instances.
+ *
+ * Returns Option to serve as both lookup and existence check - callers use 
getOrElse to fall
+ * through to legacy handling. The feature flag check is inside apply(), so 
callers don't need to
+ * check it separately.
+ */
+object TypeApiOps {
+
+  /**
+   * Returns a TypeApiOps instance for the given DataType, if supported by the 
framework.
+   *
+   * Returns None if the type is not supported or the framework is disabled. 
This is the single
+   * registration point for all client-side type operations.
+   *
+   * @param dt
+   *   the DataType to get operations for
+   * @return
+   *   Some(TypeApiOps) if supported, None otherwise
+   */
+  def apply(dt: DataType): Option[TypeApiOps] = {
+    if (!SqlApiConf.get.typesFrameworkEnabled) return None
+    dt match {
+      case tt: TimeType => Some(new TimeTypeApiOps(tt))
+      // Add new types here - single registration point
+      case _ => None
+    }
+  }
+}
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
index f8612fa3cbfc..d51007e7d336 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
@@ -30,6 +30,7 @@ import scala.language.existentials
 import org.apache.spark.SparkIllegalArgumentException
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.catalyst.util._
 import org.apache.spark.sql.internal.SQLConf
 import org.apache.spark.sql.types._
@@ -62,6 +63,13 @@ object CatalystTypeConverters {
 
   private def getConverterForType(dataType: DataType): 
CatalystTypeConverter[Any, Any, Any] = {
     TypeUtils.failUnsupportedDataType(dataType, SQLConf.get)
+    TypeOps(dataType)
+      .map(ops => new TypeOpsConverter(ops))
+      .getOrElse(getConverterForTypeDefault(dataType))
+  }
+
+  private def getConverterForTypeDefault(
+      dataType: DataType): CatalystTypeConverter[Any, Any, Any] = {
     val converter = dataType match {
       case udt: UserDefinedType[_] => UDTConverter(udt)
       case arrayType: ArrayType => ArrayConverter(arrayType.elementType)
@@ -150,6 +158,17 @@ object CatalystTypeConverters {
     override def toScalaImpl(row: InternalRow, column: Int): Any = 
row.get(column, dataType)
   }
 
+  /**
+   * Adapter that wraps TypeOps to implement CatalystTypeConverter.
+   * Used by the Types Framework to provide type conversion for 
framework-supported types.
+   */
+  private class TypeOpsConverter(ops: TypeOps)
+      extends CatalystTypeConverter[Any, Any, Any] {
+    override def toCatalystImpl(scalaValue: Any): Any = 
ops.toCatalystImpl(scalaValue)
+    override def toScala(catalystValue: Any): Any = ops.toScala(catalystValue)
+    override def toScalaImpl(row: InternalRow, column: Int): Any = 
ops.toScalaImpl(row, column)
+  }
+
   private case class UDTConverter[A >: Null](
       udt: UserDefinedType[A]) extends CatalystTypeConverter[A, A, Any] {
     // toCatalyst (it calls toCatalystImpl) will do null check.
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/InternalRow.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/InternalRow.scala
index f9bf0ebdfd9a..b27283cb3f64 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/InternalRow.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/InternalRow.scala
@@ -19,6 +19,7 @@ package org.apache.spark.sql.catalyst
 
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.types._
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.catalyst.util.{ArrayData, MapData}
 import org.apache.spark.sql.types._
 import org.apache.spark.unsafe.types.{CalendarInterval, UTF8String}
@@ -168,8 +169,11 @@ object InternalRow {
   /**
    * Returns a writer for an `InternalRow` with given data type.
    */
-  @scala.annotation.tailrec
-  def getWriter(ordinal: Int, dt: DataType): (InternalRow, Any) => Unit = dt 
match {
+  def getWriter(ordinal: Int, dt: DataType): (InternalRow, Any) => Unit =
+    
TypeOps(dt).map(_.getRowWriter(ordinal)).getOrElse(getWriterDefault(ordinal, 
dt))
+
+  private def getWriterDefault(
+      ordinal: Int, dt: DataType): (InternalRow, Any) => Unit = dt match {
     case BooleanType => (input, v) => input.setBoolean(ordinal, 
v.asInstanceOf[Boolean])
     case ByteType => (input, v) => input.setByte(ordinal, v.asInstanceOf[Byte])
     case ShortType => (input, v) => input.setShort(ordinal, 
v.asInstanceOf[Short])
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/EncoderUtils.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/EncoderUtils.scala
index e7b53344abbd..0fce96c15997 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/EncoderUtils.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/EncoderUtils.scala
@@ -23,6 +23,7 @@ import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.catalyst.encoders.AgnosticEncoders.{BinaryEncoder, 
CalendarIntervalEncoder, NullEncoder, PrimitiveBooleanEncoder, 
PrimitiveByteEncoder, PrimitiveDoubleEncoder, PrimitiveFloatEncoder, 
PrimitiveIntEncoder, PrimitiveLongEncoder, PrimitiveShortEncoder, 
SparkDecimalEncoder, VariantEncoder}
 import org.apache.spark.sql.catalyst.expressions.Expression
 import org.apache.spark.sql.catalyst.types.{PhysicalBinaryType, 
PhysicalIntegerType, PhysicalLongType}
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.catalyst.util.{ArrayData, MapData}
 import org.apache.spark.sql.types.{ArrayType, BinaryType, BooleanType, 
ByteType, CalendarIntervalType, DataType, DateType, DayTimeIntervalType, 
Decimal, DecimalType, DoubleType, FloatType, GeographyType, GeometryType, 
IntegerType, LongType, MapType, ObjectType, ShortType, StringType, StructType, 
TimestampNTZType, TimestampType, TimeType, UserDefinedType, VariantType, 
YearMonthIntervalType}
 import org.apache.spark.unsafe.types.{CalendarInterval, GeographyVal, 
GeometryVal, UTF8String, VariantVal}
@@ -97,7 +98,10 @@ object EncoderUtils {
     case _ => false
   }
 
-  def dataTypeJavaClass(dt: DataType): Class[_] = {
+  def dataTypeJavaClass(dt: DataType): Class[_] =
+    TypeOps(dt).map(_.getJavaClass).getOrElse(dataTypeJavaClassDefault(dt))
+
+  private def dataTypeJavaClassDefault(dt: DataType): Class[_] = {
     dt match {
       case _: DecimalType => classOf[Decimal]
       case _: DayTimeIntervalType => classOf[PhysicalLongType.InternalType]
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SpecificInternalRow.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SpecificInternalRow.scala
index 1f755df0516f..a5b8d0857c99 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SpecificInternalRow.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SpecificInternalRow.scala
@@ -17,8 +17,7 @@
 
 package org.apache.spark.sql.catalyst.expressions
 
-import scala.annotation.tailrec
-
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.types._
 
 /**
@@ -194,8 +193,13 @@ final class MutableAny extends MutableValue {
  */
 final class SpecificInternalRow(val values: Array[MutableValue]) extends 
BaseGenericInternalRow {
 
-  @tailrec
-  private[this] def dataTypeToMutableValue(dataType: DataType): MutableValue = 
dataType match {
+  private[this] def dataTypeToMutableValue(dataType: DataType): MutableValue =
+    TypeOps(dataType)
+      .map(_.getMutableValue)
+      .getOrElse(dataTypeToMutableValueDefault(dataType))
+
+  private[this] def dataTypeToMutableValueDefault(
+      dataType: DataType): MutableValue = dataType match {
     // We use INT for DATE and YearMonthIntervalType internally
     case IntegerType | DateType | _: YearMonthIntervalType => new MutableInt
     // We use Long for Timestamp, Timestamp without time zone and 
DayTimeInterval internally
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ToStringBase.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ToStringBase.scala
index bc294fd722b3..04052dafb61a 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ToStringBase.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ToStringBase.scala
@@ -27,6 +27,7 @@ import 
org.apache.spark.sql.catalyst.util.IntervalStringStyles.ANSI_STYLE
 import org.apache.spark.sql.internal.SQLConf
 import org.apache.spark.sql.internal.SQLConf.BinaryOutputStyle
 import org.apache.spark.sql.types._
+import org.apache.spark.sql.types.ops.TypeApiOps
 import org.apache.spark.unsafe.UTF8StringBuilder
 import org.apache.spark.unsafe.types.{CalendarInterval, UTF8String}
 import org.apache.spark.util.ArrayImplicits._
@@ -65,7 +66,12 @@ trait ToStringBase { self: UnaryExpression with 
TimeZoneAwareExpression =>
       case NoConstraint => castToString(from)
     }
 
-  private def castToString(from: DataType): Any => UTF8String = from match {
+  private def castToString(from: DataType): Any => UTF8String =
+    TypeApiOps(from)
+      .map(ops => acceptAny[Any](v => ops.formatUTF8(v)))
+      .getOrElse(castToStringDefault(from))
+
+  private def castToStringDefault(from: DataType): Any => UTF8String = from 
match {
     case CalendarIntervalType =>
       acceptAny[CalendarInterval](i => UTF8String.fromString(i.toString))
     case BinaryType => acceptAny[Array[Byte]](binaryFormatter.apply)
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
index 13b1d329f7ec..080186431eb7 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
@@ -41,6 +41,7 @@ import 
org.apache.spark.sql.catalyst.encoders.HashableWeakReference
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.expressions.codegen.Block._
 import org.apache.spark.sql.catalyst.types._
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.catalyst.util.{ArrayData, 
CollationAwareUTF8String, CollationFactory, CollationSupport, MapData, 
SQLOrderingUtil, UnsafeRowUtils}
 import org.apache.spark.sql.catalyst.util.DateTimeConstants.NANOS_PER_MILLIS
 import org.apache.spark.sql.errors.QueryExecutionErrors
@@ -1989,8 +1990,10 @@ object CodeGenerator extends Logging {
     }
   }
 
-  @tailrec
-  def javaClass(dt: DataType): Class[_] = dt match {
+  def javaClass(dt: DataType): Class[_] =
+    TypeOps(dt).map(_.getJavaClass).getOrElse(javaClassDefault(dt))
+
+  private def javaClassDefault(dt: DataType): Class[_] = dt match {
     case BooleanType => java.lang.Boolean.TYPE
     case ByteType => java.lang.Byte.TYPE
     case ShortType => java.lang.Short.TYPE
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
index 6448194f9705..d9f4f01877d5 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
@@ -47,6 +47,7 @@ import org.apache.spark.sql.catalyst.parser.CatalystSqlParser
 import org.apache.spark.sql.catalyst.trees.TreePattern
 import org.apache.spark.sql.catalyst.trees.TreePattern.{LITERAL, NULL_LITERAL, 
TRUE_OR_FALSE_LITERAL}
 import org.apache.spark.sql.catalyst.types._
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.catalyst.util._
 import org.apache.spark.sql.catalyst.util.DateTimeUtils.{instantToMicros, 
localTimeToNanos}
 import org.apache.spark.sql.catalyst.util.IntervalStringStyles.ANSI_STYLE
@@ -186,7 +187,10 @@ object Literal {
   /**
    * Create a literal with default value for given DataType
    */
-  def default(dataType: DataType): Literal = dataType match {
+  def default(dataType: DataType): Literal =
+    
TypeOps(dataType).map(_.getDefaultLiteral).getOrElse(defaultDefault(dataType))
+
+  private def defaultDefault(dataType: DataType): Literal = dataType match {
     case NullType => create(null, NullType)
     case BooleanType => Literal(false)
     case ByteType => Literal(0.toByte)
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala
index 2f2b91e0b969..6f49b3998652 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/PhysicalDataType.scala
@@ -21,6 +21,7 @@ import scala.reflect.runtime.universe.TypeTag
 import scala.reflect.runtime.universe.typeTag
 
 import org.apache.spark.sql.catalyst.expressions.{Ascending, BoundReference, 
InterpretedOrdering, SortOrder}
+import org.apache.spark.sql.catalyst.types.ops.TypeOps
 import org.apache.spark.sql.catalyst.util.{ArrayData, CollationFactory, 
MapData, SQLOrderingUtil}
 import org.apache.spark.sql.errors.QueryExecutionErrors
 import org.apache.spark.sql.types.{ArrayType, BinaryType, BooleanType, ByteExactNumeric, ByteType, CalendarIntervalType, CharType, DataType, DateType, DayTimeIntervalType, Decimal, DecimalExactNumeric, DecimalType, DoubleExactNumeric, DoubleType, FloatExactNumeric, FloatType, FractionalType, GeographyType, GeometryType, IntegerExactNumeric, IntegerType, IntegralType, LongExactNumeric, LongType, MapType, NullType, NumericType, ShortExactNumeric, ShortType, StringType, StructField, StructT [...]
@@ -34,7 +35,10 @@ sealed abstract class PhysicalDataType {
 }
 
 object PhysicalDataType {
-  def apply(dt: DataType): PhysicalDataType = dt match {
+  def apply(dt: DataType): PhysicalDataType =
+    TypeOps(dt).map(_.getPhysicalType).getOrElse(applyDefault(dt))
+
+  private def applyDefault(dt: DataType): PhysicalDataType = dt match {
     case NullType => PhysicalNullType
     case ByteType => PhysicalByteType
     case ShortType => PhysicalShortType
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/ops/TimeTypeOps.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/ops/TimeTypeOps.scala
new file mode 100644
index 000000000000..74198c956edc
--- /dev/null
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/ops/TimeTypeOps.scala
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.types.ops
+
+import java.time.LocalTime
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.{Literal, MutableLong, MutableValue}
+import org.apache.spark.sql.catalyst.types.{PhysicalDataType, PhysicalLongType}
+import org.apache.spark.sql.catalyst.util.DateTimeUtils
+import org.apache.spark.sql.types.TimeType
+import org.apache.spark.sql.types.ops.TimeTypeApiOps
+
+/**
+ * Server-side (catalyst) operations for TimeType.
+ *
+ * This class implements all TypeOps methods for the TIME data type, providing:
+ *   - Physical type: PhysicalLongType (nanoseconds since midnight)
+ *   - Literals: default is 0L (midnight)
+ *   - External conversion: java.time.LocalTime <-> Long nanoseconds
+ *
+ * It also inherits client-side operations from TimeTypeApiOps:
+ *   - String formatting (FractionTimeFormatter)
+ *   - Row encoding (LocalTimeEncoder)
+ *
+ * INTERNAL REPRESENTATION:
+ *   - Values stored as Long nanoseconds since midnight
+ *   - Range: 0 to 86,399,999,999,999
+ *   - External type: java.time.LocalTime
+ *   - Precision (0-6) affects display only, not storage
+ *
+ * @param t
+ *   The TimeType with precision information
+ * @since 4.2.0
+ */
+case class TimeTypeOps(override val t: TimeType) extends TimeTypeApiOps(t) with TypeOps {
+
+  // ==================== Physical Type Representation ====================
+
+  override def getPhysicalType: PhysicalDataType = PhysicalLongType
+
+  override def getJavaClass: Class[_] = classOf[Long]
+
+  override def getMutableValue: MutableValue = new MutableLong
+
+  override def getRowWriter(ordinal: Int): (InternalRow, Any) => Unit =
+    (input, v) => input.setLong(ordinal, v.asInstanceOf[Long])
+
+  // ==================== Literal Creation ====================
+
+  override def getDefaultLiteral: Literal = Literal.create(0L, t)
+
+  override def getJavaLiteral(v: Any): String = s"${v}L"
+
+  // ==================== External Type Conversion ====================
+
+  override def toCatalystImpl(scalaValue: Any): Any = {
+    DateTimeUtils.localTimeToNanos(scalaValue.asInstanceOf[LocalTime])
+  }
+
+  override def toScala(catalystValue: Any): Any = {
+    if (catalystValue == null) null
+    else DateTimeUtils.nanosToLocalTime(catalystValue.asInstanceOf[Long])
+  }
+
+  override def toScalaImpl(row: InternalRow, column: Int): Any = {
+    DateTimeUtils.nanosToLocalTime(row.getLong(column))
+  }
+}
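The `LocalTime` <-> nanosecond mapping documented in this file can be reproduced with plain `java.time`; this mirrors what `DateTimeUtils.localTimeToNanos` and `nanosToLocalTime` are described as doing above, though the actual Spark utilities may differ in validation and error handling:

```scala
import java.time.LocalTime

// Nanoseconds since midnight; range is 0 to 86,399,999,999,999 as noted above.
def localTimeToNanos(t: LocalTime): Long = t.toNanoOfDay

def nanosToLocalTime(nanos: Long): LocalTime = LocalTime.ofNanoOfDay(nanos)
```

For example, `localTimeToNanos(LocalTime.of(10, 30))` yields `37800000000000L`, the same value used as the Java-literal example in the `TypeOps` docs.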
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/ops/TypeOps.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/ops/TypeOps.scala
new file mode 100644
index 000000000000..628dfe941407
--- /dev/null
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/types/ops/TypeOps.scala
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.types.ops
+
+import javax.annotation.Nullable
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.{Literal, MutableValue}
+import org.apache.spark.sql.catalyst.types.PhysicalDataType
+import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.sql.types.{DataType, TimeType}
+
+/**
+ * Server-side (catalyst) type operations for the Types Framework.
+ *
+ * This trait consolidates all server-side operations that a data type must implement to function in
+ * the Spark SQL engine. All methods are mandatory because without any of them the type would fail
+ * at runtime - physical type mapping is needed for storage, literals for the optimizer, and
+ * external type conversion for user-facing operations like collect() and UDFs.
+ *
+ * This single-interface design was chosen over separate PhyTypeOps/LiteralTypeOps/ExternalTypeOps
+ * traits to make it clear what a new type must implement. There is one mandatory interface with
+ * everything required. Optional capabilities (e.g., proto serialization, client integration) are
+ * defined as separate traits that can be mixed in incrementally as a type's support expands.
+ *
+ * USAGE - integration points use TypeOps(dt) which returns Option[TypeOps]:
+ * {{{
+ * def getPhysicalType(dt: DataType): PhysicalDataType =
+ *   TypeOps(dt).map(_.getPhysicalType).getOrElse {
+ *     dt match {
+ *       case DateType => PhysicalIntegerType
+ *       // ... legacy types
+ *     }
+ *   }
+ * }}}
+ *
+ * IMPLEMENTATION - to add a new type to the framework:
+ *   1. Create a case class extending TypeOps (and optionally TypeApiOps for client-side ops)
+ *   2. Register it in TypeOps.apply() below - single registration point
+ *   3. No other file modifications needed - all integration points automatically work
+ *
+ * @see
+ *   TimeTypeOps for a reference implementation
+ * @since 4.2.0
+ */
+trait TypeOps extends Serializable {
+
+  /** The DataType this Ops instance handles. */
+  def dataType: DataType
+
+  // ==================== Physical Type Representation ====================
+
+  /**
+   * Returns the physical data type representation.
+   *
+   * Determines how values are stored in memory and accessed from InternalRow.
+   *
+   * @return
+   *   PhysicalDataType (e.g., PhysicalLongType for TimeType)
+   */
+  def getPhysicalType: PhysicalDataType
+
+  /**
+   * Returns the Java class used for code generation.
+   *
+   * @return
+   *   Java class (e.g., classOf[Long] for TimeType)
+   */
+  def getJavaClass: Class[_]
+
+  /**
+   * Returns a MutableValue instance for use in SpecificInternalRow.
+   *
+   * @return
+   *   MutableValue instance (e.g., MutableLong for TimeType)
+   */
+  def getMutableValue: MutableValue
+
+  /**
+   * Returns a writer function for setting values in an InternalRow.
+   *
+   * @param ordinal
+   *   the column index to write to
+   * @return
+   *   writer function (InternalRow, Any) => Unit
+   */
+  def getRowWriter(ordinal: Int): (InternalRow, Any) => Unit
+
+  // ==================== Literal Creation ====================
+
+  /**
+   * Returns the default literal value for this type.
+   *
+   * Used by Literal.default() for ALTER TABLE ADD COLUMN, optimizer, etc.
+   *
+   * @return
+   *   Literal with the default value and correct type
+   */
+  def getDefaultLiteral: Literal
+
+  /**
+   * Returns the Java literal representation for code generation.
+   *
+   * @param v
+   *   the internal value to represent
+   * @return
+   *   Java literal string (e.g., "37800000000000L")
+   */
+  def getJavaLiteral(v: Any): String
+
+  // ==================== External Type Conversion ====================
+
+  /**
+   * Converts an external (Scala/Java) value to its internal Catalyst representation.
+   *
+   * Handles null checking and Option unwrapping automatically.
+   *
+   * @param maybeScalaValue
+   *   the external value (may be null or Option)
+   * @return
+   *   the internal representation, or null if input was null/None
+   */
+  final def toCatalyst(@Nullable maybeScalaValue: Any): Any = {
+    maybeScalaValue match {
+      case null | None => null
+      case opt: Some[_] => toCatalystImpl(opt.get)
+      case other => toCatalystImpl(other)
+    }
+  }
+
+  /**
+   * Converts a non-null external value to its internal representation.
+   *
+   * @param scalaValue
+   *   the external value (guaranteed non-null)
+   * @return
+   *   the internal Catalyst representation
+   */
+  def toCatalystImpl(scalaValue: Any): Any
+
+  /**
+   * Converts an internal Catalyst value to its external representation.
+   *
+   * @param catalystValue
+   *   the internal value (may be null)
+   * @return
+   *   the external representation, or null if input was null
+   */
+  def toScala(@Nullable catalystValue: Any): Any
+
+  /**
+   * Extracts a value from an InternalRow and converts it to its external representation.
+   *
+   * @param row
+   *   the InternalRow containing the value
+   * @param column
+   *   the column index
+   * @return
+   *   the external representation
+   */
+  def toScalaImpl(row: InternalRow, column: Int): Any
+
+  /**
+   * Extracts a value from an InternalRow with null checking.
+   */
+  final def toScala(row: InternalRow, column: Int): Any = {
+    if (row.isNullAt(column)) null else toScalaImpl(row, column)
+  }
+}
+
+/**
+ * Factory object for creating TypeOps instances.
+ *
+ * Returns Option to serve as both lookup and existence check - callers use getOrElse to fall
+ * through to legacy handling. The feature flag check is inside apply(), so callers don't need to
+ * check it separately.
+ *
+ * Uses pattern matching (not Set enumeration) to support parameterized types like
+ * TimeType(precision) or DecimalType(precision, scale).
+ */
+object TypeOps {
+
+  /**
+   * Returns a TypeOps instance for the given DataType, if supported by the framework.
+   *
+   * Returns None if the type is not supported or the framework is disabled. This is the single
+   * registration point for all server-side type operations.
+   *
+   * @param dt
+   *   the DataType to get operations for
+   * @return
+   *   Some(TypeOps) if supported, None otherwise
+   */
+  def apply(dt: DataType): Option[TypeOps] = {
+    if (!SQLConf.get.typesFrameworkEnabled) return None
+    dt match {
+      case tt: TimeType => Some(TimeTypeOps(tt))
+      // Add new types here - single registration point
+      case _ => None
+    }
+  }
+}
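The null/Option contract of the final `toCatalyst` wrapper above can be modeled in isolation; `impl` here is a stand-in for a type's `toCatalystImpl` and is simply passed through:

```scala
// Standalone sketch of toCatalyst's null/Option handling (matches the
// final def in the trait above; "impl" stands in for toCatalystImpl).
def toCatalyst(maybeScalaValue: Any)(impl: Any => Any): Any =
  maybeScalaValue match {
    case null | None => null         // null and None both become SQL NULL
    case Some(v)     => impl(v)      // Option values are unwrapped first
    case other       => impl(other)  // plain values convert directly
  }
```

Keeping this logic in a single final method means each type's `toCatalystImpl` can assume a non-null, unwrapped input.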
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
index d5719a35cb36..c51d80df3265 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
@@ -600,6 +600,16 @@ object SQLConf {
       .booleanConf
       .createWithDefaultFunction(() => Utils.isTesting)
 
+  val TYPES_FRAMEWORK_ENABLED =
+    buildConf("spark.sql.types.framework.enabled")
+      .internal()
+      .doc("When true, use the Types Framework for supported types (currently TimeType). " +
+        "The framework centralizes type-specific operations in Ops classes instead of " +
+        "scattered pattern matching. When false, use legacy scattered implementation.")
+      .version("4.2.0")
+      .booleanConf
+      .createWithDefaultFunction(() => Utils.isTesting)
+
   val EXTENDED_EXPLAIN_PROVIDERS = buildConf("spark.sql.extendedExplainProviders")
     .doc("A comma-separated list of classes that implement the" +
       " org.apache.spark.sql.ExtendedExplainGenerator trait. If provided, Spark will print" +
@@ -7103,6 +7113,8 @@ class SQLConf extends Serializable with Logging with SqlApiConf {
 
   def geospatialEnabled: Boolean = getConf(GEOSPATIAL_ENABLED)
 
+  def typesFrameworkEnabled: Boolean = getConf(TYPES_FRAMEWORK_ENABLED)
+
   def dataSourceV2JoinPushdown: Boolean = getConf(DATA_SOURCE_V2_JOIN_PUSHDOWN)
 
   def dynamicPartitionPruningEnabled: Boolean = getConf(DYNAMIC_PARTITION_PRUNING_ENABLED)

