Github user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18732#discussion_r141829344
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/python/ArrowEvalPythonExec.scala ---
    @@ -44,14 +44,17 @@ case class ArrowEvalPythonExec(udfs: Seq[PythonUDF], output: Seq[Attribute], chi
         val schemaOut = StructType.fromAttributes(output.drop(child.output.length).zipWithIndex
           .map { case (attr, i) => attr.withName(s"_$i") })
     
    +    val batchedIter: Iterator[Iterator[InternalRow]] =
    +      iter.grouped(conf.arrowMaxRecordsPerBatch).map(_.iterator)
    +
    --- End diff ---
    
    I guess this is for making `ArrowPythonRunner` reusable between the current pandas udf and `apply()` by taking `Iterator[Iterator[InternalRow]]` instead of `Iterator[InternalRow]` as its input. The rows in each grouped iterator will become one `RecordBatch`.
    I'm not sure whether it's good or not, though.
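    
    For illustration, here is a minimal standalone sketch of what `iter.grouped(conf.arrowMaxRecordsPerBatch).map(_.iterator)` produces, with `Int` standing in for `InternalRow` and a hard-coded batch size standing in for `conf.arrowMaxRecordsPerBatch` (the names below are hypothetical, not from the PR); each inner iterator would correspond to one `RecordBatch`:
    
    ```scala
    // Standalone sketch of the grouping pattern shown in the diff above.
    // `Int` stands in for InternalRow; the literal batch size stands in for
    // conf.arrowMaxRecordsPerBatch.
    object GroupedIteratorSketch {
      def main(args: Array[String]): Unit = {
        val maxRecordsPerBatch = 3
    
        val iter: Iterator[Int] = (1 to 10).iterator
    
        // grouped() yields Iterator[Seq[Int]]; mapping each Seq back to an
        // iterator gives Iterator[Iterator[Int]], where each inner iterator
        // would become one Arrow RecordBatch.
        val batchedIter: Iterator[Iterator[Int]] =
          iter.grouped(maxRecordsPerBatch).map(_.iterator)
    
        batchedIter.zipWithIndex.foreach { case (batch, i) =>
          println(s"batch $i: ${batch.mkString(", ")}")
        }
      }
    }
    ```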

