Yup, it's very important that n > 1 for Spark Streaming jobs. If running locally, use 
local[2] or higher.

The thing to remember is that your Spark receiver takes a thread to itself to 
produce data, so you need another thread to consume it.

Under a cluster manager like YARN or Mesos, the word "thread" isn't used anymore 
(it takes on a different meaning there): you need 2 or more free compute slots, 
and you can verify that by checking how many free node managers are running, 
etc.
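For illustration, here is a minimal sketch of how the master setting looks in code, using the Spark 1.x Streaming API (the app name and batch interval are placeholders, not from the thread):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// "local[2]" gives the driver two threads: one is consumed by the
// receiver, the other actually processes the received batches.
// With "local" or "local[1]" the receiver occupies the only thread,
// and no data ever gets processed -- the job appears to hang silently.
val conf = new SparkConf()
  .setMaster("local[2]")          // n > 1 is required for receiver-based streams
  .setAppName("KafkaWordCount")   // placeholder app name
val ssc = new StreamingContext(conf, Seconds(10)) // placeholder batch interval
```

On a cluster the same constraint shows up as needing at least two cores allocated to the application, e.g. `spark-submit --master yarn --executor-cores 2 ...`.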

> On Nov 12, 2014, at 7:53 PM, "Shao, Saisai" <saisai.s...@intel.com> wrote:
> 
> Did you configure the Spark master as local? It should be local[n], n > 1, for 
> local mode. Besides, there's a Kafka wordcount example in the Spark Streaming 
> examples; you can try that. I've tested with the latest master, and it's OK.
>  
> Thanks
> Jerry
>  
> From: Tobias Pfeiffer [mailto:t...@preferred.jp] 
> Sent: Thursday, November 13, 2014 8:45 AM
> To: Bill Jay
> Cc: u...@spark.incubator.apache.org
> Subject: Re: Spark streaming cannot receive any message from Kafka
>  
> Bill,
>  
> However, I am currently using Spark 1.1.0, and the Spark Streaming job 
> cannot receive any messages from Kafka. I have not made any change to the 
> code.
>  
> Do you see any suspicious messages in the log output?
>  
> Tobias
>  
