zero323 commented on a change in pull request #34951:
URL: https://github.com/apache/spark/pull/34951#discussion_r772705290



##########
File path: python/pyspark/sql/functions.py
##########
@@ -79,16 +80,34 @@ def _invoke_function(name: str, *args: Any) -> Column:
     and wraps the result with :class:`~pyspark.sql.Column`.
     """
     assert SparkContext._active_spark_context is not None
-    jf = _get_get_jvm_function(name, SparkContext._active_spark_context)
+    jf = _get_jvm_function(name, SparkContext._active_spark_context)
     return Column(jf(*args))
 
 
+def _invoke_function_over_columns(name: str, *cols: "ColumnOrName") -> Column:
+    """
+    Invokes n-ary JVM function identified by name
+    and wraps the result with :class:`~pyspark.sql.Column`.
+    """
+    return _invoke_function(name, *(_to_java_column(col) for col in cols))
+
+
 def _invoke_function_over_column(name: str, col: "ColumnOrName") -> Column:

Review comment:
       That was my initial choice, but when I looked at all the 
`_invoke_function_over_columns(col)` call sites it seemed a little confusing ‒ 
as if an argument were missing. It is highly subjective though, so I won't 
insist on this particular approach :)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
