Github user icexelloss commented on a diff in the pull request:
https://github.com/apache/spark/pull/22305#discussion_r227591428
--- Diff:
core/src/main/scala/org/apache/spark/api/python/PythonRunner.scala ---
@@ -63,7 +65,7 @@ private[spark] object PythonEvalType {
*/
private[spark] abstract class BasePythonRunner[IN, OUT](
funcs: Seq[ChainedPythonFunctions],
- evalType: Int,
+ evalTypes: Seq[Int],
--- End diff ---
> So couldn't you just send an index that encompasses the entire range for
unbounded
This is actually what I did first. However, I think it would require
sending more data than necessary for the unbounded case. In the worst case it
would be 3x the number of columns (begin_index, end_index, data) compared to
just one column (data).
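To make the overhead concrete, here is a hypothetical sketch (the function name and parameters are illustrative, not Spark APIs) of the column count sent per window frame: with explicit bounds, two extra columns (begin_index, end_index) accompany the data, so a single data column triples in width.

```python
def columns_sent(num_data_cols: int, send_bounds: bool) -> int:
    """Illustrative count of columns shipped to the Python worker.

    With explicit bounds, begin_index and end_index columns are sent
    alongside the data; without them, only the data columns are sent.
    """
    return num_data_cols + 2 if send_bounds else num_data_cols

# Worst case: one data column becomes three (begin_index, end_index, data).
assert columns_sent(1, send_bounds=True) == 3
assert columns_sent(1, send_bounds=False) == 1
```

This is why sending bounds only for bounded windows, and just the data for unbounded ones, avoids unnecessary transfer.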
---