Github user BryanCutler commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20908#discussion_r178904674
  
    --- Diff: python/pyspark/sql/tests.py ---
    @@ -3966,6 +3967,24 @@ def random_udf(v):
             random_udf = random_udf.asNondeterministic()
             return random_udf
     
    +    def test_pandas_udf_tokenize(self):
    +        from pyspark.sql.functions import pandas_udf
    +        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
    +                              ArrayType(StringType()))
    +        self.assertEqual(tokenize.returnType, ArrayType(StringType()))
    +        df = self.spark.createDataFrame([("hi boo",), ("bye boo",)], ["vals"])
    +        result = df.select(tokenize("vals").alias("hi"))
    +        self.assertEqual([Row(hi=[u'hi', u'boo']), Row(hi=[u'bye', u'boo'])], result.collect())
    +
    +    def test_pandas_udf_nested_arrays_does_not_work(self):
    --- End diff ---
    
    Sorry @holdenk, I should have been clearer about ArrayType support. Nested arrays actually do work; the remaining concerns are mainly their use with timestamps/dates that need to be adjusted, and the lack of actual tests to verify them. Given that, it was easiest to just say nested arrays are unsupported, but I'll update SPARK-21187 to reflect this.
    
    I ran the test below and it does work; you just need to define `df` as above. Also, `ArrowTypeError` isn't defined there, so it should just be `Exception`, and `assertRaises` expects a callable, whereas `result.collect()` passes the already-evaluated result.
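    For reference, here is a rough sketch of how the nested-array case could be exercised once those fixes are applied; the method name, input data, and expected values are illustrative assumptions built on the `tokenize` test above, reusing the test class fixtures (`self.spark`, `Row`, `ArrayType`, `StringType`) already imported in tests.py:

        def test_pandas_udf_nested_arrays(self):
            from pyspark.sql.functions import pandas_udf
            # Wrap each token list in an outer list to produce a nested array column.
            tokenize = pandas_udf(lambda s: s.apply(lambda v: [v.split(' ')]),
                                  ArrayType(ArrayType(StringType())))
            self.assertEqual(tokenize.returnType, ArrayType(ArrayType(StringType())))
            df = self.spark.createDataFrame([("hi boo",), ("bye boo",)], ["vals"])
            result = df.select(tokenize("vals").alias("hi"))
            self.assertEqual([Row(hi=[[u'hi', u'boo']]), Row(hi=[[u'bye', u'boo']])],
                             result.collect())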

