Thanks Akhil Das: I actually tried setting spark.default.parallelism, but it had
no effect :-/
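
For reference, this is roughly how I am setting it; the app name, master URL, batch
interval, and parallelism value below are placeholders rather than my exact settings:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Placeholder values; setting spark.default.parallelism here did not
    // change how tasks were distributed across the workers.
    val conf = new SparkConf()
      .setAppName("streaming-app")
      .setMaster("spark://master:7077")
      .set("spark.default.parallelism", "8")

    val ssc = new StreamingContext(conf, Seconds(5))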

I am running in standalone mode and performing a mix of map/filter/foreachRDD
operations.

I had to force parallelism with repartition to get both workers to process
tasks, but I do not think this should be required (and I am not sure it is even
optimal; see the sketch below). As I mentioned, without forcing it with
repartition, scheduled tasks keep accumulating in the queue over time, so I
would expect Spark to assign them to the idle worker. Is my assumption
wrong? :-)
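
For clarity, here is roughly what the workaround looks like, using the ssc from the
snippet above; the socket source, the partition count, and the map/filter/output
steps are just stand-ins for my actual job:

    // Without the repartition call, all tasks stayed on one worker.
    val lines = ssc.socketTextStream("somehost", 9999)

    lines
      .repartition(8)          // forcing the shuffle is what spread tasks over both workers
      .map(_.toUpperCase)      // stand-in for my real map step
      .filter(_.nonEmpty)      // stand-in for my real filter step
      .foreachRDD { rdd =>
        rdd.foreach(println)   // stand-in for my real output action
      }

    ssc.start()
    ssc.awaitTermination()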

Thanks


