Hello,

I have a cluster with 1 master and 2 slaves running Spark 1.1.0. I am having
trouble getting both slaves to work at the same time. When I launch the
driver on the master, one of the slaves is assigned the receiver task, and
initially both slaves start processing tasks. After a few tens of batches,
the slave running the receiver starts processing all the tasks, and the
other one won't execute any more tasks. If I cancel the execution and start
over, the roles may switch if the other slave gets assigned the receiver,
but the behaviour is the same: the slave without the receiver stops
processing tasks after a short while. So both slaves do work, essentially,
but never at the same time in a consistent way. There are no errors in the
logs, etc.

I have tried increasing the number of partitions (up to 100, while the
slaves have 4 cores each), but with no success :-/
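
In case it helps, this is roughly what the driver code looks like
(simplified sketch only; the host name and the per-batch work are
placeholders, not the real job):

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Seconds, StreamingContext}

  val conf = new SparkConf().setAppName("StreamingTest")
  val ssc = new StreamingContext(conf, Seconds(1))  // 1s batch interval

  // Single receiver-based input; the receiver ends up on one of the slaves.
  val lines = ssc.socketTextStream("source-host", 9999)

  // Repartition each batch before the heavy work so tasks can spread
  // across both slaves (tried values up to 100 here).
  val repartitioned = lines.repartition(100)

  repartitioned.foreachRDD { rdd =>
    rdd.count()  // stand-in for the real per-batch processing
  }

  ssc.start()
  ssc.awaitTermination()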

I understand that Spark may decide not to distribute tasks to all workers
due to data locality, etc., but in this case I think there is something else
going on, since a single slave cannot keep up with the processing rate and
the total delay keeps growing: I have set the batch interval to 1s, but each
batch takes about 1.6s to process, so after a while the delay (and the
amount of enqueued data) is just too much.
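
One thing I have not tried yet is touching the locality-related settings; if
I understand correctly, the relevant knob would be spark.locality.wait, e.g.
something like this (untested, just my reading of the configuration docs):

  import org.apache.spark.SparkConf

  // Lower the locality wait so the scheduler falls back to the idle
  // slave sooner instead of waiting for a node-local slot.
  // (The default is 3s; "0" disables the wait.)
  val conf = new SparkConf()
    .setAppName("StreamingTest")
    .set("spark.locality.wait", "0")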

Does Spark take this time constraint into consideration when scheduling,
i.e. total processing time <= batch duration? Is there any configuration
affecting that?

Am I missing something important? Any hints or things to test?

Thanks in advance! ;-)


