ivoson commented on code in PR #43642:
URL: https://github.com/apache/spark/pull/43642#discussion_r1383153910


##########
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/UnsafeRowConverterSuite.scala:
##########
@@ -313,7 +313,7 @@ class UnsafeRowConverterSuite extends SparkFunSuite with Matchers with PlanTestB
     val converter = factory.create(fieldTypes)
 
     val row = new SpecificInternalRow(fieldTypes)
-    val values = Array(new CalendarInterval(0, 7, 0L), null)
+    val values = Seq(new CalendarInterval(0, 7, 0L), null)

Review Comment:
   Change to `toImmutableArraySeq`
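
   For context: Spark's `org.apache.spark.util.ArrayImplicits.toImmutableArraySeq` extension wraps an `Array` in an `immutable.ArraySeq` without copying, whereas `Seq(...)` builds a new collection. A minimal standard-library sketch of the same wrapping behavior (the object and variable names here are illustrative, not from the PR):

   ```scala
   import scala.collection.immutable.ArraySeq

   object WrapArrayDemo {
     def main(args: Array[String]): Unit = {
       val values = Array(1, 2, 3)
       // Wraps the existing array as an immutable Seq view, without copying
       // the elements; this is what toImmutableArraySeq delegates to.
       val seq: ArraySeq[Int] = ArraySeq.unsafeWrapArray(values)
       assert(seq == Seq(1, 2, 3))
       println(seq.mkString(","))
     }
   }
   ```

   Note that because the array is wrapped, not copied, later mutation of the underlying `Array` would be visible through the `ArraySeq`; that is the trade-off for avoiding the copy.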



##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala:
##########
@@ -89,7 +89,8 @@ object JDBCRDD extends Logging {
    * @return A Catalyst schema corresponding to columns in the given order.
    */
   private def pruneSchema(schema: StructType, columns: Array[String]): StructType = {
-    val fieldMap = Map(schema.fields.map(x => x.name -> x): _*)
+    import org.apache.spark.util.ArrayImplicits._
+    val fieldMap = Map(schema.fields.map(x => x.name -> 
x).toImmutableArraySeq: _*)

Review Comment:
   Thanks, done.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

