Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/11347#discussion_r54329264
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -1753,15 +1753,26 @@ class DataFrame private[sql](
* Converts a JavaRDD to a PythonRDD.
*/
protected[sql] def javaToPython: JavaRDD[Array[Byte]] = {
- val structType = schema // capture it for closure
- val rdd = queryExecution.toRdd.map(EvaluatePython.toJava(_, structType))
- EvaluatePython.javaToPython(rdd)
+ if (isOutputPickled) {
+ queryExecution.toRdd.map(_.getBinary(0))
--- End diff ---
For `groupByKey`, we need the JVM to do the grouping for us, i.e. cluster by
key and sort by key. If the grouped Dataset contains custom objects, we need to
send them to the JVM and keep them as binary while doing the grouping. Since we
have to append the key data to every input row, we should use an un-batched
pickler here, so that each row stays a self-contained pickled payload (see the
sketch below).
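
To make the batched vs. un-batched distinction concrete, here is a minimal
sketch of the two pickling modes using the `net.razorvine.pickle.Pickler` class
from the Pyrolite library that Spark depends on. The helper names
(`pickleBatched`, `pickleUnbatched`) are hypothetical illustrations, not
Spark's actual code path:

```scala
import net.razorvine.pickle.Pickler

// Hypothetical helpers, for illustration only.

// Batched: many rows go into a single pickled payload. This amortizes
// pickling overhead, but a per-row key cannot be attached without
// unpickling the whole batch first.
def pickleBatched(rows: Seq[Array[Any]]): Array[Byte] = {
  val pickler = new Pickler()
  pickler.dumps(rows.toArray) // one payload for the entire batch
}

// Un-batched: one pickled payload per row, so the JVM can pair the
// grouping key with each record while the value stays opaque binary.
def pickleUnbatched(rows: Iterator[Array[Any]]): Iterator[Array[Byte]] = {
  val pickler = new Pickler()
  rows.map(row => pickler.dumps(row))
}
```

With the un-batched form, every element of the output iterator is
independently decodable, so the key bytes can be appended per record before
the data is shipped to Python.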