[ https://issues.apache.org/jira/browse/SPARK-3095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen resolved SPARK-3095.
-------------------------------
    Resolution: Won't Fix

I'm going to close this as "Won't Fix", since changing count() to 
implicitly cache the RDD would break user programs that rely on the 
existing behavior.

> [PySpark] Speed up RDD.count()
> ------------------------------
>
>                 Key: SPARK-3095
>                 URL: https://issues.apache.org/jira/browse/SPARK-3095
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: Davies Liu
>            Assignee: Davies Liu
>            Priority: Minor
>
> RDD.count() can fall back to RDD._jrdd.count() when the RDD is not a 
> PipelinedRDD.
> If the JavaRDD is serialized in batch mode, it's possible to skip 
> deserializing the chunks (except the last one), because every full chunk 
> holds the same number of elements. In some special cases the chunks are 
> re-ordered, and then this shortcut does not work.
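
To illustrate the proposed shortcut, here is a minimal sketch in plain 
Python, assuming a simple length-prefixed framing (a 4-byte big-endian 
length followed by a pickled list). The framing and the helper names 
write_batched / count_batched are hypothetical stand-ins for PySpark's 
framed batch serializer, not its actual wire format. Because every full 
batch holds exactly batch_size elements, only the final payload ever 
needs to be unpickled:

{code}
import pickle
import struct
from io import BytesIO

def write_batched(elems, batch_size, out):
    # Write elements as length-prefixed pickled batches. This framing
    # (4-byte big-endian length + pickled list) is an assumption for
    # illustration, standing in for PySpark's batched serializer.
    for i in range(0, len(elems), batch_size):
        payload = pickle.dumps(elems[i:i + batch_size])
        out.write(struct.pack(">i", len(payload)))
        out.write(payload)

def count_batched(stream, batch_size):
    # Count elements while unpickling only the final batch: every
    # earlier frame is skipped after reading its raw bytes, since each
    # full batch is assumed to hold exactly batch_size elements.
    full_batches = -1      # batches seen so far, excluding the last
    last_payload = None
    while True:
        header = stream.read(4)
        if len(header) < 4:
            break
        (length,) = struct.unpack(">i", header)
        last_payload = stream.read(length)  # bytes skipped, not unpickled
        full_batches += 1
    if last_payload is None:
        return 0
    return full_batches * batch_size + len(pickle.loads(last_payload))

# 10 elements in batches of 4 -> two full batches plus one batch of 2.
buf = BytesIO()
write_batched(list(range(10)), 4, buf)
buf.seek(0)
assert count_batched(buf, 4) == 10
{code}

The re-ordering caveat is visible here: if a short batch lands anywhere 
but last, the full_batches arithmetic is wrong, which is exactly the 
special case the description calls out.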


