srowen commented on a change in pull request #28648:
URL: https://github.com/apache/spark/pull/28648#discussion_r431864565
##########
File path: python/pyspark/context.py
##########
@@ -864,8 +865,21 @@ def union(self, rdds):
first_jrdd_deserializer = rdds[0]._jrdd_deserializer
if any(x._jrdd_deserializer != first_jrdd_deserializer for x in rdds):
rdds = [x._reserialize() for x in rdds]
- cls = SparkContext._jvm.org.apache.spark.api.java.JavaRDD
- jrdds = SparkContext._gateway.new_array(cls, len(rdds))
+ gw = SparkContext._gateway
+ jvm = SparkContext._jvm
+ jrdd_cls = jvm.org.apache.spark.api.java.JavaRDD
+ jpair_rdd_cls = jvm.org.apache.spark.api.java.JavaPairRDD
+ jdouble_rdd_cls = jvm.org.apache.spark.api.java.JavaDoubleRDD
+ if is_instance_of(gw, rdds[0]._jrdd, jrdd_cls):
+ cls = jrdd_cls
+ elif is_instance_of(gw, rdds[0]._jrdd, jpair_rdd_cls):
Review comment:
So if the first RDD is a JavaPairRDD but another one is a plain JavaRDD, I
think this will fail when the array is created on the JVM side, since the
element class is chosen from the first RDD alone. Do we want to check whether
_all_ of them are pair or double RDDs? I haven't thought it through fully,
but it occurred to me.
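
To make the concern concrete, here is a minimal sketch of the "check all of
them" idea. The helper name `pick_union_jrdd_class` is hypothetical and this
is not the PR's code; it just reuses `is_instance_of` from py4j and the class
handles already fetched in the diff:

```python
from py4j.java_gateway import is_instance_of

def pick_union_jrdd_class(gw, jvm, rdds):
    """Return a Java class that *every* backing _jrdd is an instance of,
    so that the JVM-side array cannot reject an element later."""
    candidates = [
        jvm.org.apache.spark.api.java.JavaRDD,
        jvm.org.apache.spark.api.java.JavaPairRDD,
        jvm.org.apache.spark.api.java.JavaDoubleRDD,
    ]
    for cls in candidates:
        # Accept a candidate only if it fits all RDDs, not just the first.
        if all(is_instance_of(gw, x._jrdd, cls) for x in rdds):
            return cls
    # Mixed Java RDD classes: fail fast with a clear message instead of a
    # py4j array-store error on the JVM side.
    names = [x._jrdd.getClass().getCanonicalName() for x in rdds]
    raise TypeError("Cannot union RDDs with mixed Java RDD classes: %s" % names)
```

Usage would mirror the diff: `cls = pick_union_jrdd_class(gw, jvm, rdds)`
before calling `gw.new_array(cls, len(rdds))`.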