Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/20908#discussion_r178905896
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,15 @@ def random_udf(v):
random_udf = random_udf.asNondeterministic()
return random_udf
+ def test_pandas_udf_tokenize(self):
+ from pyspark.sql.functions import pandas_udf
+ tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
--- End diff ---
I think tokenizing like this is a pretty common use case, so it's fine to
have an explicit test for it
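For context, the wrapped function in the diff operates on a pandas Series per batch. A minimal pandas-only sketch of what the UDF body does (the `tokenize` helper and sample strings here are illustrative, not from the PR; on the Spark side the result would surface as an `ArrayType(StringType())` column):

```python
import pandas as pd

# pandas_udf hands the function a pandas Series of strings per batch;
# splitting each string on spaces yields a Series of token lists.
def tokenize(s: pd.Series) -> pd.Series:
    return s.apply(lambda text: text.split(' '))

tokens = tokenize(pd.Series(['hello world', 'spark sql']))
print(tokens.tolist())  # [['hello', 'world'], ['spark', 'sql']]
```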
---