GitHub user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21977#discussion_r208449418
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/python/AggregateInPandasExec.scala ---
    @@ -137,13 +135,12 @@ case class AggregateInPandasExec(
     
           val columnarBatchIter = new ArrowPythonRunner(
             pyFuncs,
    -        bufferSize,
    -        reuseWorker,
             PythonEvalType.SQL_GROUPED_AGG_PANDAS_UDF,
             argOffsets,
             aggInputSchema,
             sessionLocalTimeZone,
    -        pythonRunnerConf).compute(projectedRowIter, context.partitionId(), context)
    +        pythonRunnerConf,
    +        sparkContext.conf).compute(projectedRowIter, context.partitionId(), context)
    --- End diff ---
    
    It seems like this runs on the executor side, but can we get `sparkContext` there?
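    
    For context, a minimal sketch of the usual executor-side alternative (assuming only the conf values are needed here): there is no `SparkContext` on executors, but `SparkEnv` is initialized in every JVM, so configuration is typically read via `SparkEnv.get.conf`.
    
        import org.apache.spark.{SparkConf, SparkEnv}
    
        // Sketch only: obtain the SparkConf on the executor side, where no
        // SparkContext exists. SparkEnv.get returns the per-JVM environment,
        // and its conf is the executor's view of the configuration.
        def executorSideConf(): SparkConf = SparkEnv.get.conf
    
        // Illustrative usage, mirroring the constructor call in the diff above:
        // new ArrowPythonRunner(pyFuncs, PythonEvalType.SQL_GROUPED_AGG_PANDAS_UDF,
        //   argOffsets, aggInputSchema, sessionLocalTimeZone, pythonRunnerConf,
        //   executorSideConf())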


---
