Hi,

This is an interesting point of view. I thought the HashPartitioner worked
completely differently.
Here's my understanding - the HashPartitioner defines how keys are
distributed between the different partitions of a dataset, but it plays no
role in assigning partitions to executors for processing.
I may be wrong, so please let me know if that's the case :)
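
Just to make sure we're describing the same thing, here's a minimal sketch
of my understanding (the partition count and key below are made up, purely
for illustration): HashPartitioner only maps a key to a partition index; it
says nothing about which executor will later run that partition's task.

    import org.apache.spark.HashPartitioner

    // A HashPartitioner with 4 partitions (the number is arbitrary here).
    val partitioner = new HashPartitioner(4)

    // getPartition maps a key to a partition index (0..3) based on its
    // hashCode. It does not decide which executor processes that partition.
    println(partitioner.getPartition("some-key"))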

In my case the partitions are even - the dataset is distributed evenly
between partitions. It's just that they are processed very unevenly - 1-2
nodes handle many more partitions than the other cluster members.
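
For reference, here's roughly how I would verify that the per-partition
counts are even (a sketch for spark-shell, where sc is the predefined
SparkContext; the toy RDD and the partition count of 48 just stand in for
my real dataset):

    import org.apache.spark.HashPartitioner

    // Toy pair RDD as a stand-in for the real dataset, hash-partitioned
    // into 48 partitions (an arbitrary number for this sketch).
    val rdd = sc.parallelize(1 to 1000000)
                .map(i => (i % 1000, i))
                .partitionBy(new HashPartitioner(48))

    // Count the records in each partition to confirm the data is balanced.
    val sizes = rdd.mapPartitionsWithIndex((idx, it) => Iterator((idx, it.size)))
                   .collect()
    sizes.foreach { case (idx, n) => println(s"partition $idx -> $n records") }

The imbalance I see is not in these counts, but in how many of the
resulting tasks end up on the same 1-2 nodes.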

Also note that the cluster is made of identical nodes in terms of hardware,
so it's not that one of the nodes simply works faster than the others.

Thanks,
Borislav



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-work-distribution-among-execs-tp26502p26508.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
