Hi all,

I found the reason for this issue. It seems that in the new version, if I do not
specify spark.default.parallelism when using KafkaUtils.createStream, an
exception is thrown at the Kafka stream creation stage. In previous
versions, it seems Spark would fall back to a default value.
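For reference, here is a minimal sketch of the setup that works for me (the
ZooKeeper address, group id, topic name, and parallelism value below are
placeholders, not values from my actual job), using the Spark 1.1.0
KafkaUtils.createStream API:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

// Use local[n] with n > 1 so one thread can run the Kafka receiver
// and at least one other thread can process the received data.
val conf = new SparkConf()
  .setMaster("local[2]")
  .setAppName("KafkaStreamExample")
  .set("spark.default.parallelism", "2") // set explicitly; placeholder value

val ssc = new StreamingContext(conf, Seconds(2))

// Placeholder connection details -- adjust for your cluster.
val zkQuorum = "localhost:2181"
val groupId  = "test-group"
val topics   = Map("test-topic" -> 1) // topic -> number of receiver threads

// createStream returns a DStream of (key, message) pairs.
val messages = KafkaUtils.createStream(ssc, zkQuorum, groupId, topics)
messages.map(_._2).print()

ssc.start()
ssc.awaitTermination()
```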

Thanks!

Bill

On Thu, Nov 13, 2014 at 5:00 AM, Helena Edelson <helena.edel...@datastax.com
> wrote:

> I encounter no issues with streaming from kafka to spark in 1.1.0. Do you
> perhaps have a version conflict?
>
> Helena
> On Nov 13, 2014 12:55 AM, "Jay Vyas" <jayunit100.apa...@gmail.com> wrote:
>
>> Yup, it's very important that n > 1 for Spark Streaming jobs. If running
>> locally, use local[2]....
>>
>> The thing to remember is that your Spark receiver will take a thread to
>> itself to produce data, so you need another thread to consume it.
>>
>> In a cluster manager like YARN or Mesos, the word "thread" is not used
>> anymore and has a different meaning: you need 2 or more free compute
>> slots, and that should be verified by checking how many free node
>> managers are running, etc.
>>
>> On Nov 12, 2014, at 7:53 PM, "Shao, Saisai" <saisai.s...@intel.com>
>> wrote:
>>
>>  Did you configure the Spark master as local? It should be local[n], n > 1,
>> for local mode. Besides, there's a Kafka word count example in the Spark
>> Streaming examples; you can try that. I've tested with the latest master, and it's OK.
>>
>>
>>
>> Thanks
>>
>> Jerry
>>
>>
>>
>> *From:* Tobias Pfeiffer [mailto:t...@preferred.jp]
>> *Sent:* Thursday, November 13, 2014 8:45 AM
>> *To:* Bill Jay
>> *Cc:* u...@spark.incubator.apache.org
>> *Subject:* Re: Spark streaming cannot receive any message from Kafka
>>
>>
>>
>> Bill,
>>
>>
>>
>>   However, now that I am using Spark 1.1.0, the Spark Streaming
>> job cannot receive any messages from Kafka. I have not made any changes to
>> the code.
>>
>>
>>
>> Do you see any suspicious messages in the log output?
>>
>>
>>
>> Tobias
>>
>>
>>
>>
