Thanks everyone for the discussion.

Just to note: I restarted the job yet again, and this time tasks are indeed
being executed by both worker nodes. So the behavior does seem
inconsistent/broken at the moment.

Then I added a third node to the cluster; a third executor came up, and
everything broke :|


