Github user holdenk commented on a diff in the pull request:
    --- Diff: python/pyspark/sql/ ---
    @@ -3966,6 +3967,15 @@ def random_udf(v):
             random_udf = random_udf.asNondeterministic()
             return random_udf
    +    def test_pandas_udf_tokenize(self):
    +        from pyspark.sql.functions import pandas_udf
    +        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' 
    --- End diff --
    @HyukjinKwon It doesn't, but given that the old documentation implied that 
the tokenization use case wouldn't work, I thought it would be good to 
illustrate in a test that it does.
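
Outside of Spark, the Series-level logic that the quoted test wraps in
`pandas_udf` can be sketched with plain pandas (the sample strings below are
illustrative, not taken from the actual test):

```python
import pandas as pd

# The function a tokenizing pandas_udf would wrap: it maps a pandas Series
# of strings to a Series of token lists, so the UDF's return type would be
# an array-of-strings column.
def tokenize(s: pd.Series) -> pd.Series:
    return s.apply(lambda text: text.split(' '))

tokens = tokenize(pd.Series(["hi there", "bye now"]))
print(tokens.tolist())  # [['hi', 'there'], ['bye', 'now']]
```

In Spark this function would be passed to `pandas_udf` with an
`ArrayType(StringType())` return type and applied to a string column.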

