[ https://issues.apache.org/jira/browse/SPARK-2944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14099317#comment-14099317 ]
Xiangrui Meng commented on SPARK-2944:
--------------------------------------
I changed the priority to Major because I couldn't reproduce the bug
deterministically, nor could I verify whether this issue was introduced after
v1.0. It seems to happen only when each task is very small.
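For reference, the skew can be observed directly by counting cached partitions per host. This is only a sketch (it assumes a running cluster and the {{rdd}} from the snippet below); it tags each partition with the hostname of the executor that holds it:

{code}
// Count how many partitions of the cached RDD live on each host.
// Assumes `rdd` has already been cached and materialized (e.g. via count()).
val partitionsPerHost = rdd
  .mapPartitions { _ =>
    Iterator(java.net.InetAddress.getLocalHost.getHostName)
  }
  .countByValue()

partitionsPerHost.toSeq.sortBy(-_._2).foreach { case (host, n) =>
  println(s"$host: $n partitions")
}
{code}

On a balanced 16-node cluster with 1000 partitions, each host should report roughly 62-63 partitions; the 156-vs-8 split below is far outside that.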
> sc.makeRDD doesn't distribute partitions evenly
> -----------------------------------------------
>
> Key: SPARK-2944
> URL: https://issues.apache.org/jira/browse/SPARK-2944
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 1.1.0
> Reporter: Xiangrui Meng
> Assignee: Xiangrui Meng
>
> 16 nodes EC2 cluster:
> {code}
> val rdd = sc.makeRDD(0 until 1e9.toInt, 1000).cache()
> rdd.count()
> {code}
> Saw 156 partitions on one node while only 8 partitions on another.