Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22305#discussion_r224548624
  
    --- Diff: 
core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
    @@ -63,7 +65,7 @@ private[spark] object PythonEvalType {
      */
     private[spark] abstract class BasePythonRunner[IN, OUT](
         funcs: Seq[ChainedPythonFunctions],
    -    evalType: Int,
    +    evalTypes: Seq[Int],
    --- End diff --
    
    I see your point - I can see this being used for other things too, for
example, numpy-variant vectorized UDFs, or window transform UDFs for unbounded
windows (an n -> n mapping, such as rank). I chose this approach for its
flexibility.
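    To illustrate the flexibility argument, here is a minimal sketch (not the
actual Spark implementation - names like `FunctionGroup` and the eval-type
constants' values are illustrative) of how `evalTypes: Seq[Int]` lets each
chained function group carry its own eval type, e.g. mixing bounded and
unbounded window UDFs in one runner:

```scala
// Hypothetical sketch, not the real PythonRunner: constants and classes
// here are stand-ins to show the shape of the Seq[Int] design.
object EvalType {
  val GROUPED_AGG_PANDAS_UDF = 202 // illustrative values
  val WINDOW_AGG_PANDAS_UDF = 203
}

// Stand-in for ChainedPythonFunctions.
case class FunctionGroup(name: String)

// With a Seq[Int], each function group gets its own eval type, so one
// runner can serve heterogeneous UDF variants without extra signaling.
class Runner(funcs: Seq[FunctionGroup], evalTypes: Seq[Int]) {
  require(funcs.length == evalTypes.length,
    "each function group needs a matching eval type")
}

// One runner handling a bounded-window aggregation alongside an
// unbounded-window transform, distinguished purely by eval type.
val runner = new Runner(
  Seq(FunctionGroup("boundedAgg"), FunctionGroup("rankLike")),
  Seq(EvalType.GROUPED_AGG_PANDAS_UDF, EvalType.WINDOW_AGG_PANDAS_UDF))
```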
    
    For this particular case, it is possible to distinguish between bounded and
unbounded windows without this change, for example by encoding something extra
in the arg offsets, but that would be repurposing arg offsets for something
they weren't meant to carry...
    



---
