[ https://issues.apache.org/jira/browse/SPARK-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14091699#comment-14091699 ]

Patrick Wendell commented on SPARK-2944:
----------------------------------------

Hey [~mengxr], do you know how this behavior differs from Spark 1.0? Also, if
there is a clear difference, could you check whether the behavior is changed by
this patch?

https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=63bdb1f41b4895e3a9444f7938094438a94d3007

> sc.makeRDD doesn't distribute partitions evenly
> -----------------------------------------------
>
>                 Key: SPARK-2944
>                 URL: https://issues.apache.org/jira/browse/SPARK-2944
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.1.0
>            Reporter: Xiangrui Meng
>            Assignee: Xiangrui Meng
>            Priority: Critical
>
> 16 nodes EC2 cluster:
> {code}
> val rdd = sc.makeRDD(0 until 1e9.toInt, 1000).cache()
> rdd.count()
> {code}
> Saw 156 partitions on one node but only 8 partitions on another.
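For comparison, a minimal sketch (plain Scala, no Spark required) of what an even spread of 1000 partitions over 16 nodes would look like; the 156-vs-8 split reported above is far outside this range. The object and method names here are illustrative only:

```scala
// Sketch: expected per-node partition counts under an even distribution.
// With 1000 partitions and 16 nodes, each node should hold either
// floor(1000/16) = 62 or 63 partitions.
object EvenSpread {
  def expected(partitions: Int, nodes: Int): (Int, Int) = {
    val base = partitions / nodes
    val remainder = partitions % nodes
    // `remainder` nodes receive one extra partition; the rest receive `base`.
    (base, if (remainder > 0) base + 1 else base)
  }

  def main(args: Array[String]): Unit = {
    val (lo, hi) = expected(1000, 16)
    println(s"Even spread: $lo to $hi partitions per node")
  }
}
```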



--
This message was sent by Atlassian JIRA
(v6.2#6252)
