[GitHub] [spark] d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs

2019-09-15 Thread GitBox
d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs
URL: https://github.com/apache/spark/pull/24981#discussion_r324452042
 
 

 ##
 File path: python/pyspark/sql/cogroup.py
 ##
 @@ -0,0 +1,98 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from pyspark import since
+from pyspark.rdd import PythonEvalType
+from pyspark.sql.column import Column
+from pyspark.sql.dataframe import DataFrame
+
+
+class CoGroupedData(object):
+    """
+    A logical grouping of two :class:`GroupedData`,
+    created by :func:`GroupedData.cogroup`.
+
+    .. note:: Experimental
+
+    .. versionadded:: 3.0
+    """
+
+    def __init__(self, gd1, gd2):
+        self._gd1 = gd1
+        self._gd2 = gd2
+        self.sql_ctx = gd1.sql_ctx
+
+    @since(3.0)
+    def apply(self, udf):
+        """
+        Applies a function to each cogroup using a pandas udf and returns the result
+        as a `DataFrame`.
+
+        The user-defined function should take two `pandas.DataFrame` and return another
+        ``pandas.DataFrame``. For each side of the cogroup, all columns are passed together
 
 Review comment:
  Yes, sorry, it should be single.
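  For context, here is a minimal usage sketch of the API this docstring describes (assumes an active `spark` session; `GroupedData.cogroup`, `CoGroupedData.apply`, and `PandasUDFType.COGROUPED_MAP` follow this PR's design and may differ from what finally merges):

  ```python
  import pandas as pd
  from pyspark.sql.functions import pandas_udf, PandasUDFType

  df1 = spark.createDataFrame([(1, 1.0), (2, 2.0)], ("id", "v1"))
  df2 = spark.createDataFrame([(1, "x"), (2, "y")], ("id", "v2"))

  # The UDF receives one pandas.DataFrame per side of the cogroup
  # and must return a single pandas.DataFrame.
  @pandas_udf("id long, v1 double, v2 string", PandasUDFType.COGROUPED_MAP)
  def merge(left, right):
      return pd.merge(left, right, on="id")

  df1.groupby("id").cogroup(df2.groupby("id")).apply(merge).show()
  ```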





[GitHub] [spark] d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs

2019-08-20 Thread GitBox
d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs
URL: https://github.com/apache/spark/pull/24981#discussion_r315761355
 
 

 ##
 File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/python/FlatMapGroupsInPandasExec.scala
 ##
 @@ -75,88 +71,23 @@ case class FlatMapGroupsInPandasExec(
   override protected def doExecute(): RDD[InternalRow] = {
     val inputRDD = child.execute()
 
-    val chainedFunc = Seq(ChainedPythonFunctions(Seq(pandasFunction)))
-    val sessionLocalTimeZone = conf.sessionLocalTimeZone
-    val pythonRunnerConf = ArrowUtils.getPythonRunnerConfMap(conf)
-
-    // Deduplicate the grouping attributes.
-    // If a grouping attribute also appears in data attributes, then we don't need to send the
-    // grouping attribute to Python worker. If a grouping attribute is not in data attributes,
-    // then we need to send this grouping attribute to python worker.
-    //
-    // We use argOffsets to distinguish grouping attributes and data attributes as following:
-    //
-    // argOffsets[0] is the length of grouping attributes
-    // argOffsets[1 .. argOffsets[0]+1] is the arg offsets for grouping attributes
-    // argOffsets[argOffsets[0]+1 .. ] is the arg offsets for data attributes
-
-    val dataAttributes = child.output.drop(groupingAttributes.length)
-    val groupingIndicesInData = groupingAttributes.map { attribute =>
-      dataAttributes.indexWhere(attribute.semanticEquals)
-    }
-
-    val groupingArgOffsets = new ArrayBuffer[Int]
-    val nonDupGroupingAttributes = new ArrayBuffer[Attribute]
-    val nonDupGroupingSize = groupingIndicesInData.count(_ == -1)
-
-    // Non duplicate grouping attributes are added to nonDupGroupingAttributes and
 
 Review comment:
  So the comments are definitely still needed, as the mechanism is essentially the same and this is somewhat complex. These comments have moved to BasePandasGroupExec.resolveArgOffsets; the wording has changed slightly, but I think all the same information is there. If you think anything is missing or unclear, please let me know and I'll be happy to amend.
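  To make the layout concrete, here is a small worked example of how a worker-side consumer would slice `argOffsets`, following the scheme in the comment above (the column names and offsets are hypothetical):

  ```python
  # Hypothetical: worker-side columns [id, k, v] at offsets [0, 1, 2],
  # grouped by "id". Following the scheme above:
  #
  #   arg_offsets = [1, 0, 1, 2]
  #                  ^  ^  ^--^-- offsets of the data attributes (k, v)
  #                  |  '------- offset of the grouping attribute (id)
  #                  '---------- number of grouping attributes
  arg_offsets = [1, 0, 1, 2]

  num_grouping = arg_offsets[0]               # 1
  grouping = arg_offsets[1:num_grouping + 1]  # [0]  -> grouping columns
  data = arg_offsets[num_grouping + 1:]       # [1, 2] -> data columns
  ```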





[GitHub] [spark] d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs

2019-08-20 Thread GitBox
d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs
URL: https://github.com/apache/spark/pull/24981#discussion_r315755503
 
 

 ##
 File path: python/pyspark/serializers.py
 ##
 @@ -356,6 +356,33 @@ def __repr__(self):
 return "ArrowStreamPandasSerializer"
 
 
+class InterleavedArrowReader(object):
 
 Review comment:
  After @BryanCutler's suggestion, we no longer need this.





[GitHub] [spark] d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs

2019-08-20 Thread GitBox
d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs
URL: https://github.com/apache/spark/pull/24981#discussion_r315754866
 
 

 ##
 File path: python/pyspark/sql/tests/test_pandas_udf_cogrouped_map.py
 ##
 @@ -0,0 +1,285 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+import datetime
+import unittest
+import sys
+
+from collections import OrderedDict
+from decimal import Decimal
+
+from pyspark.sql import Row
+from pyspark.sql.functions import array, explode, col, lit, udf, sum, pandas_udf, PandasUDFType
+from pyspark.sql.types import *
+from pyspark.testing.sqlutils import ReusedSQLTestCase, have_pandas, have_pyarrow, \
+    pandas_requirement_message, pyarrow_requirement_message
+from pyspark.testing.utils import QuietTest
+
+if have_pandas:
+    import pandas as pd
+    from pandas.util.testing import assert_frame_equal, assert_series_equal
+
+if have_pyarrow:
+    import pyarrow as pa
+
+
+"""
+Tests below use pd.DataFrame.assign that will infer mixed types (unicode/str) for column names
+from kwargs w/ Python 2, so need to set check_column_type=False and avoid this check
+"""
+if sys.version < '3':
+    _check_column_type = False
+else:
+    _check_column_type = True
+
+
+@unittest.skipIf(
+    not have_pandas or not have_pyarrow,
+    pandas_requirement_message or pyarrow_requirement_message)
+class CoGroupedMapPandasUDFTests(ReusedSQLTestCase):
+
+    @property
+    def data1(self):
+        return self.spark.range(10).toDF('id') \
+            .withColumn("ks", array([lit(i) for i in range(20, 30)])) \
+            .withColumn("k", explode(col('ks'))) \
+            .withColumn("v", col('k') * 10) \
+            .drop('ks')
+
+    @property
+    def data2(self):
+        return self.spark.range(10).toDF('id') \
+            .withColumn("ks", array([lit(i) for i in range(20, 30)])) \
+            .withColumn("k", explode(col('ks'))) \
+            .withColumn("v2", col('k') * 100) \
+            .drop('ks')
+
+    def test_simple(self):
+        self._test_merge(self.data1, self.data2)
+
+    def test_left_group_empty(self):
+        left = self.data1.where(col("id") % 2 == 0)
+        self._test_merge(left, self.data2)
+
+    def test_right_group_empty(self):
+        right = self.data2.where(col("id") % 2 == 0)
+        self._test_merge(self.data1, right)
+
+    def test_different_schemas(self):
+        right = self.data2.withColumn('v3', lit('a'))
+        self._test_merge(self.data1, right, 'id long, k int, v int, v2 int, v3 string')
+
+    def test_complex_group_by(self):
+        left = pd.DataFrame.from_dict({
+            'id': [1, 2, 3],
+            'k': [5, 6, 7],
+            'v': [9, 10, 11]
+        })
+
+        right = pd.DataFrame.from_dict({
+            'id': [11, 12, 13],
+            'k': [5, 6, 7],
+            'v2': [90, 100, 110]
+        })
+
+        left_df = self.spark\
 
 Review comment:
  Yes, that's better; done.
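  For readers of this excerpt: the `_test_merge` helper the tests call is not shown in the diff. A plausible sketch of its shape (an assumption about this PR, not its actual body) would be:

  ```python
  def _test_merge(self, left, right, output_schema='id long, k int, v int, v2 int'):
      # Hypothetical reconstruction: merge the two sides of each cogroup
      # with pandas, then compare against a plain pandas merge of the inputs.
      @pandas_udf(output_schema, PandasUDFType.COGROUPED_MAP)
      def merge(l, r):
          return pd.merge(l, r, on=['id', 'k'])

      result = left.groupby('id').cogroup(right.groupby('id')) \
          .apply(merge) \
          .sort(['id', 'k']) \
          .toPandas()
      expected = pd.merge(left.toPandas(), right.toPandas(), on=['id', 'k']) \
          .sort_values(['id', 'k']) \
          .reset_index(drop=True)

      assert_frame_equal(expected, result, check_column_type=_check_column_type)
  ```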





[GitHub] [spark] d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs

2019-08-20 Thread GitBox
d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs
URL: https://github.com/apache/spark/pull/24981#discussion_r315752866
 
 

 ##
 File path: python/pyspark/worker.py
 ##
 @@ -359,10 +417,24 @@ def map_batch(batch):
         arg_offsets, udf = read_single_udf(
             pickleSer, infile, eval_type, runner_conf, udf_index=0)
         udfs['f'] = udf
-        split_offset = arg_offsets[0] + 1
-        arg0 = ["a[%d]" % o for o in arg_offsets[1: split_offset]]
-        arg1 = ["a[%d]" % o for o in arg_offsets[split_offset:]]
-        mapper_str = "lambda a: f([%s], [%s])" % (", ".join(arg0), ", ".join(arg1))
+        parsed_offsets = extract_key_value_indexes()
+        keys = ["a[%d]" % o for o in parsed_offsets[0][0]]
 
 Review comment:
  Thanks, that's a useful tip.
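  As an illustration of what this string-building produces (the offsets here are hypothetical, not taken from the PR):

  ```python
  # Hypothetical: key column at offset 0, data columns at offsets 1 and 2.
  keys = ["a[%d]" % o for o in [0]]      # ['a[0]']
  vals = ["a[%d]" % o for o in [1, 2]]   # ['a[1]', 'a[2]']
  mapper_str = "lambda a: f([%s], [%s])" % (", ".join(keys), ", ".join(vals))
  # -> "lambda a: f([a[0]], [a[1], a[2]])"
  # Once eval'ed, the mapper splits each input row `a` into key columns
  # and data columns before calling the wrapped UDF `f`.
  ```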





[GitHub] [spark] d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs

2019-08-20 Thread GitBox
d80tb7 commented on a change in pull request #24981: [SPARK-27463][PYTHON] Support Dataframe Cogroup via Pandas UDFs
URL: https://github.com/apache/spark/pull/24981#discussion_r315750156
 
 

 ##
 File path: python/pyspark/serializers.py
 ##
 @@ -401,6 +427,22 @@ def __repr__(self):
 return "ArrowStreamPandasUDFSerializer"
 
 
+class InterleavedArrowStreamPandasSerializer(ArrowStreamPandasUDFSerializer):
+
+def __init__(self, timezone, safecheck, assign_cols_by_name):
+super(InterleavedArrowStreamPandasSerializer, self).__init__(timezone, 
safecheck, assign_cols_by_name)
+
+def load_stream(self, stream):
+"""
+Deserialize ArrowRecordBatches to an Arrow table and return as a list 
of pandas.Series.
+"""
+reader = InterleavedArrowReader(stream)
 
 Review comment:
  Doh! Yes, you're quite right; this is much better.
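  For the record, the plain (non-interleaved) read path this was replaced by boils down to something like the following standalone sketch, using public pyarrow APIs; the function name is illustrative, not the PR's:

  ```python
  import pyarrow as pa

  def arrow_stream_to_pandas_series(stream):
      # Read every Arrow record batch from the stream into one table...
      reader = pa.ipc.open_stream(stream)
      table = reader.read_all()
      # ...then convert each column to a pandas.Series.
      return [column.to_pandas() for column in table.itercolumns()]
  ```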

