Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/20137#discussion_r159506133
--- Diff: python/pyspark/sql/udf.py ---
@@ -130,14 +133,17 @@ def _create_judf(self):
         wrapped_func = _wrap_function(sc, self.func, self.returnType)
         jdt = spark._jsparkSession.parseDataType(self.returnType.json())
         judf = sc._jvm.org.apache.spark.sql.execution.python.UserDefinedPythonFunction(
-            self._name, wrapped_func, jdt, self.evalType, self._deterministic)
+            self._name, wrapped_func, jdt, self.evalType, self.deterministic)
         return judf

     def __call__(self, *cols):
         judf = self._judf
         sc = SparkContext._active_spark_context
         return Column(judf.apply(_to_seq(sc, cols, _to_java_column)))

+    # This function is for improving the online help system in the interactive interpreter.
+    # For example, the built-in help / pydoc.help. It wraps the UDF with the docstring and
+    # argument annotation. (See: SPARK-19161)
--- End diff ---
I think we can put this in the docstring of `_wrapped` between L148 and L150.
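
For context, a minimal standalone sketch of the technique `_wrapped` relies on (this is illustrative only, not the actual pyspark implementation; `udf_like` and `add_one` are hypothetical names): `functools.wraps` copies the original function's `__doc__`, `__name__`, and annotations onto the wrapper, which is what makes `help` / `pydoc.help` show useful documentation for the wrapped UDF (SPARK-19161).

```python
import functools

def udf_like(f):
    # Hypothetical wrapper standing in for a UDF: functools.wraps copies
    # f's __doc__, __name__, __module__, __annotations__, etc. onto the
    # wrapper, so pydoc/help display f's original documentation.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        return f(*args, **kwargs)
    return wrapper

def add_one(x: int) -> int:
    """Return x + 1."""
    return x + 1

wrapped = udf_like(add_one)
# help(wrapped) now shows add_one's signature and docstring
# instead of the generic wrapper's.
```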
---