[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-06 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r179843510
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,15 @@ def random_udf(v):
         random_udf = random_udf.asNondeterministic()
         return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
--- End diff --

@HyukjinKwon It doesn't, but given that the old documentation implied that
the tokenization use case wouldn't work, I thought it would be good to
illustrate in a test that it does.
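(As a plain-Python aside, with no Spark involved: the per-row behavior the quoted test pins down is just whitespace splitting. `rows` below is a hypothetical stand-in for the `vals` column, not part of the actual test.)

```python
# Hypothetical stand-in for the "vals" column in the quoted test.
rows = ["hi boo", "bye boo"]

# The same per-element logic the pandas_udf lambda applies to its Series:
# split each string on a single space, producing a list of tokens
# (an ArrayType(StringType()) value in Spark terms).
result = [s.split(' ') for s in rows]
print(result)  # [['hi', 'boo'], ['bye', 'boo']]
```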


---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-03 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r178995333
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,24 @@ def random_udf(v):
        random_udf = random_udf.asNondeterministic()
        return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
+                              ArrayType(StringType()))
+        self.assertEqual(tokenize.returnType, ArrayType(StringType()))
+        df = self.spark.createDataFrame([("hi boo",), ("bye boo",)], ["vals"])
+        result = df.select(tokenize("vals").alias("hi"))
+        self.assertEqual([Row(hi=[u'hi', u'boo']), Row(hi=[u'bye', u'boo'])],
+                         result.collect())
+
+    def test_pandas_udf_nested_arrays_does_not_work(self):
--- End diff --

Awesome, that makes more sense.


---




[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-03 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r178987212
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,15 @@ def random_udf(v):
        random_udf = random_udf.asNondeterministic()
        return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
--- End diff --

I don't think this PR aims to fix or support tokenizing in a UDF ..


---




[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-03 Thread BryanCutler
Github user BryanCutler commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r178905896
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,15 @@ def random_udf(v):
        random_udf = random_udf.asNondeterministic()
        return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
--- End diff --

I think tokenizing is a pretty common use case, so it's fine to have an
explicit test for it.


---




[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-03 Thread BryanCutler
Github user BryanCutler commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r178904674
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,24 @@ def random_udf(v):
        random_udf = random_udf.asNondeterministic()
        return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
+                              ArrayType(StringType()))
+        self.assertEqual(tokenize.returnType, ArrayType(StringType()))
+        df = self.spark.createDataFrame([("hi boo",), ("bye boo",)], ["vals"])
+        result = df.select(tokenize("vals").alias("hi"))
+        self.assertEqual([Row(hi=[u'hi', u'boo']), Row(hi=[u'bye', u'boo'])],
+                         result.collect())
+
+    def test_pandas_udf_nested_arrays_does_not_work(self):
--- End diff --

Sorry @holdenk, I should have been clearer about ArrayType support.
Nested arrays actually do work ok; the real issues are primarily their use
with timestamps/dates, which need to be adjusted, and the lack of actual
testing to verify them. So it was easiest to just say nested arrays are
unsupported, but I'll update SPARK-21187 to reflect this.

I ran the test below and it does work; you just need to define `df` as in
the test above (also, `ArrowTypeError` isn't defined and should just be
`Exception`, and `assertRaises` expects a callable where `result.collect()`
is being passed).
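(Editor's note: the `assertRaises` point can be illustrated outside Spark. `assertRaises(Exc, callable, *args)` invokes the callable itself, whereas passing `result.collect()` evaluates the expression before `assertRaises` ever runs. A minimal stand-alone sketch with plain `unittest`; `collect_like` is a hypothetical stand-in for `result.collect`:)

```python
import unittest

def collect_like():
    # Hypothetical stand-in for result.collect(): raises when the
    # Arrow conversion rejects the type.
    raise Exception("nested array conversion failed")

class AssertRaisesDemo(unittest.TestCase):
    def test_callable_form(self):
        # Correct: hand assertRaises the callable itself (plus any args);
        # assertRaises invokes it and traps the expected exception.
        self.assertRaises(Exception, collect_like)

    def test_context_manager_form(self):
        # Equivalent context-manager form, often more readable.
        with self.assertRaises(Exception):
            collect_like()

# Run the two tests without unittest.main() so this stays embeddable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AssertRaisesDemo)
runner_result = unittest.TextTestRunner(verbosity=0).run(suite)
```

(Writing `self.assertRaises(Exception, result.collect())` would raise at argument-evaluation time, outside `assertRaises`, and fail the test for the wrong reason.)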


---




[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-02 Thread holdenk
Github user holdenk commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r178606547
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,23 @@ def random_udf(v):
        random_udf = random_udf.asNondeterministic()
        return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
+                              ArrayType(StringType()))
+        self.assertEqual(tokenize.returnType, ArrayType(StringType()))
+        df = self.spark.createDataFrame([("hi boo",), ("bye boo",)], ["vals"])
+        result = df.select(tokenize("vals").alias("hi"))
+        self.assertEqual([Row(hi=[u'hi', u'boo']), Row(hi=[u'bye', u'boo'])],
+                         result.collect())
+
+    def test_pandas_udf_nested_arrays_does_not_work(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: [str.split(' ')]),
+                              ArrayType(ArrayType(StringType())))
+        result = df.select(tokenize("vals").alias("hi"))
+        # If we start supporting nested arrays we should update the
+        # documentation in functions.py
+        self.assertRaises(ArrowTypeError, result.collect())
--- End diff --

Sure, sounds good.


---




[GitHub] spark pull request #20908: [WIP][SPARK-23672][PYTHON] Document support for n...

2018-04-02 Thread BryanCutler
Github user BryanCutler commented on a diff in the pull request:

https://github.com/apache/spark/pull/20908#discussion_r178600886
  
--- Diff: python/pyspark/sql/tests.py ---
@@ -3966,6 +3967,23 @@ def random_udf(v):
        random_udf = random_udf.asNondeterministic()
        return random_udf
 
+    def test_pandas_udf_tokenize(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: str.split(' ')),
+                              ArrayType(StringType()))
+        self.assertEqual(tokenize.returnType, ArrayType(StringType()))
+        df = self.spark.createDataFrame([("hi boo",), ("bye boo",)], ["vals"])
+        result = df.select(tokenize("vals").alias("hi"))
+        self.assertEqual([Row(hi=[u'hi', u'boo']), Row(hi=[u'bye', u'boo'])],
+                         result.collect())
+
+    def test_pandas_udf_nested_arrays_does_not_work(self):
+        from pyspark.sql.functions import pandas_udf
+        tokenize = pandas_udf(lambda s: s.apply(lambda str: [str.split(' ')]),
+                              ArrayType(ArrayType(StringType())))
+        result = df.select(tokenize("vals").alias("hi"))
+        # If we start supporting nested arrays we should update the
+        # documentation in functions.py
+        self.assertRaises(ArrowTypeError, result.collect())
--- End diff --

Could you put this under `with QuietTest(self.sc):` to suppress the error?
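(Editor's note: `QuietTest` is a small context manager in pyspark's test suite that temporarily raises the log level so an expected failure doesn't spew a stack trace into the test output. A rough stand-alone analog of the pattern, using plain `logging` rather than the actual pyspark helper, looks like:)

```python
import logging
from contextlib import contextmanager

@contextmanager
def quiet(logger_name=""):
    """Temporarily raise the log level so expected errors stay silent."""
    logger = logging.getLogger(logger_name)
    old_level = logger.level
    logger.setLevel(logging.CRITICAL)  # suppress everything below CRITICAL
    try:
        yield
    finally:
        logger.setLevel(old_level)  # always restore the previous level

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("demo")

with quiet("demo"):
    log.error("this expected error is suppressed")  # not emitted

log.info("logging restored")  # emitted normally again
```

(The real `QuietTest` works on the JVM-side log4j logger through the SparkContext, but the try/finally shape is the same.)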


---
