GitHub user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18732#discussion_r141886413
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/python/ArrowEvalPythonExec.scala ---
    @@ -44,14 +44,17 @@ case class ArrowEvalPythonExec(udfs: Seq[PythonUDF], output: Seq[Attribute], chi
         val schemaOut = StructType.fromAttributes(output.drop(child.output.length).zipWithIndex
           .map { case (attr, i) => attr.withName(s"_$i") })
     
    +    val batchedIter: Iterator[Iterator[InternalRow]] =
    +      iter.grouped(conf.arrowMaxRecordsPerBatch).map(_.iterator)
    +
    --- End diff ---
    
    I've actually found that this code doesn't work now; I will fix it.
    
    @ueshin is right, this is to reuse `ArrowEvalPython` for both the current pandas UDF and `apply()`. I basically want to lift the batching logic out of `ArrowEvalPython` so the callers can decide how they want rows to be batched into a `RecordBatch`.
    
    In the current pandas UDF case, rows are batched by `conf.arrowMaxRecordsPerBatch`; in `apply()`, they are batched one group per batch.
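
    Just to illustrate the intended shape (not the final fix), here is a minimal sketch of the two batching strategies over plain Scala iterators. `BatchingSketch`, `batchBySize`, and `batchByGroup` are hypothetical names; the size-based variant mirrors the `grouped` call from the diff above (which, as noted, still needs fixing), and the group-per-batch variant assumes rows arrive sorted by the grouping key so each group is contiguous:

    ```scala
    object BatchingSketch {
      // Pandas UDF case: fixed-size batches of up to maxRecordsPerBatch rows
      // (same shape as the `grouped` call in the diff above).
      def batchBySize[T](iter: Iterator[T], maxRecordsPerBatch: Int): Iterator[Iterator[T]] =
        iter.grouped(maxRecordsPerBatch).map(_.iterator)

      // apply() case: one batch per group; assumes the input is sorted by the
      // grouping key, so rows with the same key are contiguous.
      def batchByGroup[T, K](iter: Iterator[T])(key: T => K): Iterator[Iterator[T]] =
        new Iterator[Iterator[T]] {
          private val buffered = iter.buffered
          def hasNext: Boolean = buffered.hasNext
          def next(): Iterator[T] = {
            val k = key(buffered.head)
            val group = scala.collection.mutable.ArrayBuffer.empty[T]
            while (buffered.hasNext && key(buffered.head) == k) group += buffered.next()
            group.iterator
          }
        }
    }
    ```

    For example, `BatchingSketch.batchByGroup(Iterator((1, "a"), (1, "b"), (2, "c")))(_._1)` yields two batches, `(1, "a"), (1, "b")` and `(2, "c")`, which is the one-group-per-`RecordBatch` behavior `apply()` needs.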

