bowenliang123 commented on code in PR #5811:
URL: https://github.com/apache/kyuubi/pull/5811#discussion_r1413388739


##########
externals/kyuubi-spark-sql-engine/src/main/scala/org/apache/kyuubi/engine/spark/schema/RowSet.scala:
##########
@@ -24,15 +24,24 @@ import scala.collection.JavaConverters._
 import org.apache.hive.service.rpc.thrift._
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.execution.HiveResult
+import org.apache.spark.sql.execution.HiveResult.TimeFormatters
 import org.apache.spark.sql.types._
 
 import org.apache.kyuubi.util.RowSetUtils._
 
 object RowSet {
 
+  private val timeUnrelatedDataTypes: Set[DataType] =
+    Set(BooleanType, FloatType, BinaryType, StringType)
+
   def toHiveString(valueAndType: (Any, DataType), nested: Boolean = false): String = {
     // compatible w/ Spark 3.1 and above
-    val timeFormatters = HiveResult.getTimeFormatters
+    val timeFormatters: TimeFormatters = valueAndType match {

Review Comment:
   Good idea.
   Maybe this is what you are suggesting: allow callers to reuse an existing timeFormatters instance, and create one only when none is provided.
   ```
     def toHiveString(
         valueAndType: (Any, DataType),
         nested: Boolean = false,
        timeFormatters: TimeFormatters = HiveResult.getTimeFormatters): String = {
       HiveResult.toHiveString(valueAndType, nested, timeFormatters)
     }
   ```
   The reused timeFormatters may need to be scoped to the executing thread rather than to a single dataset, especially considering possible parallel execution at the column level or row level.
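   The thread-scoping concern can be illustrated with a small, self-contained sketch (not Kyuubi code; it uses `java.text.SimpleDateFormat`, a classic non-thread-safe formatter, as a stand-in for Spark's `TimeFormatters`): each thread gets its own instance via `ThreadLocal`, so parallel row- or column-level conversion never shares one formatter while still reusing it for every value formatted on that thread.
   ```scala
   import java.text.SimpleDateFormat
   import java.util.Date

   // Hypothetical illustration: SimpleDateFormat is not thread-safe, so each
   // thread formatting rows in parallel gets its own instance instead of
   // sharing a single formatter across threads.
   object ThreadScopedFormatters {
     private val local: ThreadLocal[SimpleDateFormat] =
       ThreadLocal.withInitial(() => new SimpleDateFormat("yyyy-MM-dd"))

     // Reuses the thread-local instance for every value on this thread,
     // avoiding one-formatter-per-value allocation.
     def format(d: Date): String = local.get().format(d)
   }
   ```
   Applied to this PR, the same idea would mean creating one `TimeFormatters` per thread (or per conversion loop) and passing it down via the default parameter above, rather than constructing a new one per value.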



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

