As it stands currently, no.

If you're already overriding the DStream, it would be pretty
straightforward to change the Kafka parameters used when creating the RDD
for the next batch, though.
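
A rough sketch of that pattern, using illustrative names rather than the
actual DirectKafkaInputDStream internals: keep a mutable per-partition cap
that the overridden DStream consults when computing the offset range for the
next batch's RDD.

    // Hypothetical sketch, not Spark's API: a runtime-adjustable cap on how
    // many messages per partition the next batch is allowed to cover.
    object RateLimitSketch {
      // Flip this from monitoring code when a failure condition is detected.
      @volatile var maxMessagesPerPartition: Option[Long] = None

      // Given the last consumed offset and the latest offset on the broker,
      // pick the untilOffset for the next batch.
      def clamp(currentOffset: Long, latestOffset: Long): Long =
        maxMessagesPerPartition match {
          case Some(max) => math.min(currentOffset + max, latestOffset)
          case None      => latestOffset
        }

      def main(args: Array[String]): Unit = {
        println(clamp(100L, 100000L))        // no cap: 100000
        maxMessagesPerPartition = Some(10L)  // failure detected: throttle
        println(clamp(100L, 100000L))        // capped: 110
        maxMessagesPerPartition = None       // recovered: remove the cap
        println(clamp(110L, 100000L))        // back to 100000
      }
    }

In a real override the clamp would be applied per TopicAndPartition when the
KafkaRDD for the next batch is built.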

On Wed, Aug 26, 2015 at 11:41 PM, Shushant Arora <shushantaror...@gmail.com>
wrote:

> Can I change the params fetch.message.max.bytes or
> spark.streaming.kafka.maxRatePerPartition
> at run time, across batches?
> Say I detect some failure condition in my system and decide to consume only
> 10 messages per partition in the next batch interval, and if that succeeds I
> reset the max limit to unlimited again.
>
> On Wed, Aug 26, 2015 at 9:32 PM, Cody Koeninger <c...@koeninger.org>
> wrote:
>
>> see http://kafka.apache.org/documentation.html#consumerconfigs
>>
>> fetch.message.max.bytes
>>
>> in the kafka params passed to the constructor
>>
>>
>> On Wed, Aug 26, 2015 at 10:39 AM, Shushant Arora <
>> shushantaror...@gmail.com> wrote:
>>
>>> What's the default buffer in Spark Streaming 1.3 for Kafka messages?
>>>
>>> Say in this run it has to fetch messages from offsets 1 to 10000. Will it
>>> fetch them all in one go, or does it internally fetch messages in smaller
>>> batches?
>>>
>>> Is there any setting to configure the number of offsets fetched in one
>>> batch?
>>>
>>
>>
>
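
For the fetch.message.max.bytes suggestion quoted above, a minimal sketch
using the Spark 1.3 direct stream API; the broker list, topic name, batch
interval, and the 8 MB value are placeholders:

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    object FetchSizeExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("FetchSizeExample")
        val ssc = new StreamingContext(conf, Seconds(30))

        // fetch.message.max.bytes is a plain Kafka consumer setting: it caps
        // the size of a single fetch request per partition; it does not limit
        // how many offsets end up in a batch.
        val kafkaParams = Map[String, String](
          "metadata.broker.list" -> "broker1:9092,broker2:9092",  // placeholder brokers
          "fetch.message.max.bytes" -> (8 * 1024 * 1024).toString // example: 8 MB
        )

        val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
          ssc, kafkaParams, Set("mytopic"))  // placeholder topic

        stream.map(_._2).print()

        ssc.start()
        ssc.awaitTermination()
      }
    }

Since these params are read when the stream is set up, changing them between
batches isn't supported out of the box, which is what the custom-DStream
workaround above is for.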
