Github user davies commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11347#discussion_r54328489
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
    @@ -1753,15 +1753,26 @@ class DataFrame private[sql](
        * Converts a JavaRDD to a PythonRDD.
        */
       protected[sql] def javaToPython: JavaRDD[Array[Byte]] = {
    -    val structType = schema  // capture it for closure
    -    val rdd = queryExecution.toRdd.map(EvaluatePython.toJava(_, structType))
    -    EvaluatePython.javaToPython(rdd)
    +    if (isOutputPickled) {
    +      queryExecution.toRdd.map(_.getBinary(0))
    --- End diff --
    
    It will be faster to do the count() in Python; then you don't need to pass all these rows into the JVM.
    
    Even for groupByKey(), if the key_func is a Python function, the rows will be deserialized in Python.
    
    For the in-memory cache, the rows are packed as batches in the JVM, so it's not true that the rows need to be serialized separately in order to be processed in the JVM.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
