Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1598#discussion_r15670058
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
---
@@ -701,6 +682,53 @@ private[spark] object PythonRDD extends Logging {
}
}
+
+  /**
+   * Convert an RDD of serialized Python dictionaries to Scala Maps (no recursive conversions).
+   * This function is outdated; PySpark no longer uses it.
+   */
+  def pythonToJavaMap(pyRDD: JavaRDD[Array[Byte]]): JavaRDD[Map[String, _]] = {
+ pyRDD.rdd.mapPartitions { iter =>
+ val unpickle = new Unpickler
+ iter.flatMap { row =>
+ unpickle.loads(row) match {
+          // in case objects are pickled in batch mode
+          case objs: JArrayList[JMap[String, _] @unchecked] => objs.map(_.toMap)
+ // not in batch mode
+ case obj: JMap[String @unchecked, _] => Seq(obj.toMap)
+ }
+ }
+ }
+ }
+
+  /**
+   * Convert an RDD of serialized Python tuples to Arrays (no recursive conversions).
+   * It is only used by pyspark.sql.
+   */
+ def pythonToJava(pyRDD: JavaRDD[Array[Byte]]): JavaRDD[Array[_]] = {
+ pyRDD.rdd.mapPartitions { iter =>
+ val unpickle = new Unpickler
+ iter.flatMap { row =>
+ unpickle.loads(row) match {
+          // in case objects are pickled in batch mode
+          case objs: JArrayList[_] => Try(objs.map(obj => obj match {
--- End diff --
Does this behave properly in the unbatched case? In general, I don't think
that it's safe to detect batching by checking whether the returned item is a
list, since that might inadvertently flatten a list-of-lists that was
serialized without batching.
(see https://github.com/apache/spark/pull/1338#discussion-diff-15508664)
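The ambiguity can be illustrated in plain Python (a minimal sketch, not taken from the PR): a batch containing one row and a single unbatched row that happens to be a list produce byte-identical pickle payloads, so the deserializer cannot tell them apart by checking whether the unpickled value is a list.

```python
import pickle

# Batched mode: a list wrapping one row (the row is a dict).
batched = pickle.dumps([{"a": 1}])

# Unbatched mode: a single row that itself IS a list of dicts.
unbatched_list_row = pickle.dumps([{"a": 1}])

# The payloads are identical, so a decoder that treats "top-level
# list" as "batch" would wrongly flatten the unbatched list row.
assert batched == unbatched_list_row
```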