You should be able to recompile the streaming-kafka project against 1.2;
let me know if you run into any issues.
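Once it builds, the usage from your app should look roughly like this (just
a sketch: the broker, topic, and app names are placeholders, and it assumes
the 1.3-era createDirectStream signature from the module you recompiled):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val sparkConf = new SparkConf().setAppName("direct-kafka-sketch")
    val ssc = new StreamingContext(sparkConf, Seconds(10))

    // Placeholders -- point these at your own brokers and topic
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics = Set("events")

    // Direct (receiver-less) stream: one RDD partition per Kafka partition,
    // offsets tracked by the stream itself rather than a WAL
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics)

    stream.map(_._2).print()
    ssc.start()
    ssc.awaitTermination()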

From a usability standpoint, the only relevant thing I can think of that
was added after 1.2 is the ability to get the partitionId off of the
TaskContext; you can just use mapPartitionsWithIndex as a workaround.
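Something like this (just a sketch, with made-up data standing in for your
stream's RDDs) gives you the partition index without touching TaskContext:

    import org.apache.spark.{SparkConf, SparkContext}

    // Local master only so this sketch runs standalone
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("partition-id-workaround"))

    // Made-up data; in your job this would be the RDDs coming off the Kafka stream
    val rdd = sc.parallelize(1 to 100, numSlices = 4)

    // mapPartitionsWithIndex hands you the partition index directly,
    // so you don't need TaskContext.partitionId (which isn't exposed in 1.2)
    val tagged = rdd.mapPartitionsWithIndex { (partitionId, iter) =>
      iter.map(record => (partitionId, record))
    }

    tagged.collect().foreach(println)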

On Wed, Aug 5, 2015 at 8:18 PM, Sourabh Chandak <sourabh3...@gmail.com>
wrote:

> Thanks Tathagata. I tried that, but BlockGenerator internally uses
> SystemClock, which is also private.
>
> We are using DSE, so we are stuck with Spark 1.2 and can't use the
> receiver-less version. Is it possible to use the same code as a separate
> API with 1.2?
>
> Thanks,
> Sourabh
>
> On Wed, Aug 5, 2015 at 6:13 PM, Tathagata Das <t...@databricks.com> wrote:
>
>> You could very easily strip out the BlockGenerator code from the Spark
>> source and use it directly, in the same way the Reliable Kafka Receiver
>> uses it. BTW, you should know that we will be deprecating the
>> receiver-based approach in favor of the Direct Kafka approach. That one is
>> quite flexible, can give exactly-once guarantees without a WAL, and is
>> more robust and performant. Consider using it.
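>>
>> If you do go the copy-the-code route, the wiring is roughly the following
>> once BlockGenerator and BlockGeneratorListener live in your own package (a
>> sketch from memory -- double-check names and signatures against the exact
>> source you copy):
>>
>>     // Inside your custom Receiver subclass, mirroring ReliableKafkaReceiver.
>>     // MessageType is a placeholder for whatever your Receiver[MessageType] stores.
>>     private var blockGenerator: BlockGenerator = _
>>
>>     def onStart(): Unit = {
>>       blockGenerator = new BlockGenerator(new GeneratedBlockHandler, streamId, SparkEnv.get.conf)
>>       blockGenerator.start()
>>       // consumer threads then call blockGenerator.addDataWithCallback(message, offsetMetadata)
>>     }
>>
>>     def onStop(): Unit = {
>>       if (blockGenerator != null) blockGenerator.stop()
>>     }
>>
>>     private final class GeneratedBlockHandler extends BlockGeneratorListener {
>>       def onAddData(data: Any, metadata: Any): Unit = { /* remember the offset for this record */ }
>>       def onGenerateBlock(blockId: StreamBlockId): Unit = { /* snapshot offsets for the new block */ }
>>       def onPushBlock(blockId: StreamBlockId, arrayBuffer: ArrayBuffer[_]): Unit = {
>>         // store the whole block, then commit offsets only after it is safely stored
>>         store(arrayBuffer.asInstanceOf[ArrayBuffer[MessageType]])
>>       }
>>       def onError(message: String, throwable: Throwable): Unit = reportError(message, throwable)
>>     }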
>>
>>
>> On Wed, Aug 5, 2015 at 5:48 PM, Sourabh Chandak <sourabh3...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I am trying to replicate the Kafka streaming receiver for a custom
>>> version of Kafka and want to create a reliable receiver. The current
>>> implementation uses BlockGenerator, which is a private class inside Spark
>>> Streaming, so I can't use it in my code. Can someone point me to some
>>> resources for tackling this issue?
>>>
>>> Thanks,
>>> Sourabh
>>>
>>
>>
>
