Github user gatorsmile commented on a diff in the pull request:
    --- Diff: python/pyspark/sql/ ---
    @@ -4325,15 +4333,16 @@ def data(self):
                 .withColumn("vs", array([lit(i) for i in range(20, 30)])) \
                 .withColumn("v", explode(col('vs'))).drop('vs')
    -    def test_simple(self):
    -        from pyspark.sql.functions import pandas_udf, PandasUDFType
    -        df =
    +    def test_supported_types(self):
    --- End diff --
    I'm starting to worry about the test coverage of vectorized UDFs and Arrow-based 
to/from pandas DataFrame conversion. Do we have any plan in PySpark to test all the data types? 

