[jira] [Assigned] (SPARK-21985) PySpark PairDeserializer is broken for double-zipped RDDs
[ https://issues.apache.org/jira/browse/SPARK-21985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon reassigned SPARK-21985:
------------------------------------

    Assignee: Andrew Ray

> PySpark PairDeserializer is broken for double-zipped RDDs
> ---------------------------------------------------------
>
>                 Key: SPARK-21985
>                 URL: https://issues.apache.org/jira/browse/SPARK-21985
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.1.0, 2.1.1, 2.2.0
>            Reporter: Stuart Berg
>            Assignee: Andrew Ray
>              Labels: bug
>             Fix For: 2.1.2, 2.2.1, 2.3.0
>
> PySpark fails to deserialize double-zipped RDDs. For example, the following worked in Spark 2.0.2:
> {code}
> >>> a = sc.parallelize('aaa')
> >>> b = sc.parallelize('bbb')
> >>> c = sc.parallelize('ccc')
> >>> a_bc = a.zip( b.zip(c) )
> >>> a_bc.collect()
> [('a', ('b', 'c')), ('a', ('b', 'c')), ('a', ('b', 'c'))]
> {code}
> But in Spark >= 2.1.0, it fails (regardless of Python 2 vs. 3):
> {code}
> >>> a_bc.collect()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/workspace/spark-2.2.0-bin-hadoop2.7/python/pyspark/rdd.py", line 810, in collect
>     return list(_load_from_socket(port, self._jrdd_deserializer))
>   File "/workspace/spark-2.2.0-bin-hadoop2.7/python/pyspark/serializers.py", line 329, in _load_stream_without_unbatching
>     if len(key_batch) != len(val_batch):
> TypeError: object of type 'itertools.izip' has no len()
> {code}
> As you can see, the error seems to be caused by [a check in the PairDeserializer class|https://github.com/apache/spark/blob/d03aebbe6508ba441dc87f9546f27aeb27553d77/python/pyspark/serializers.py#L346-L348]:
> {code}
> if len(key_batch) != len(val_batch):
>     raise ValueError("Can not deserialize PairRDD with different number of items"
>                      " in batches: (%d, %d)" % (len(key_batch), len(val_batch)))
> {code}
> If that check is removed, the example above works without error. Can the check simply be removed?
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
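The failure can be reproduced outside Spark: Python 3's `zip` (and Python 2's `itertools.izip`) returns an iterator with no `__len__`, so calling `len()` on a zipped batch raises the `TypeError` above. The sketch below is a hypothetical stand-in for the `PairDeserializer` length check, not Spark's actual fix; the function name and structure are illustrative. It keeps the mismatch check but first materializes any batch that lacks `__len__`:

```python
def check_pair_batch(key_batch, val_batch):
    """Hypothetical stand-in for the PairDeserializer length check.

    zip() / itertools.izip results are iterators without __len__, so
    materialize any batch that lacks __len__ before comparing sizes
    (this assumes batches are small enough to hold in memory).
    """
    if not hasattr(key_batch, '__len__'):
        key_batch = list(key_batch)
    if not hasattr(val_batch, '__len__'):
        val_batch = list(val_batch)
    if len(key_batch) != len(val_batch):
        raise ValueError("Can not deserialize PairRDD with different number of items"
                         " in batches: (%d, %d)" % (len(key_batch), len(val_batch)))
    return zip(key_batch, val_batch)

# Mirrors the a.zip(b.zip(c)) example: the inner batch is a zip iterator.
inner = zip('bbb', 'ccc')  # len(inner) would raise TypeError
result = list(check_pair_batch('aaa', inner))
# → [('a', ('b', 'c')), ('a', ('b', 'c')), ('a', ('b', 'c'))]
```

This keeps the sanity check for genuinely mismatched batch sizes (it still raises `ValueError`) while tolerating unsized iterators, rather than deleting the check outright as the reporter suggests.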
[jira] [Assigned] (SPARK-21985) PySpark PairDeserializer is broken for double-zipped RDDs
[ https://issues.apache.org/jira/browse/SPARK-21985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-21985:
------------------------------------

    Assignee: Apache Spark
[jira] [Assigned] (SPARK-21985) PySpark PairDeserializer is broken for double-zipped RDDs
[ https://issues.apache.org/jira/browse/SPARK-21985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-21985:
------------------------------------

    Assignee: (was: Apache Spark)