d as you
may have a dependency on external systems which may respond slowly in some
rare but possible scenarios. This is why I also implement the 3rd approach,
which alerts me well in advance when my consumer is marked dead for some
reason.
Regards,
Vinay Sharma
On Mon, May 2, 2016 at 11:53 P
al piece to my jigsaw. If I get that exception I know I can
> safely abandon the current batch of records I am processing and return to
> my poll command knowing that the ones I haven’t committed will be picked up
> after the rebalance completes.
>
> Many thanks,
> Phil
>
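The pattern Phil describes — catch the commit failure, abandon the in-flight batch, and return to poll() — might be sketched roughly like this (a sketch only, assuming a kafka-clients `KafkaConsumer<String, String>`; `process()` and the topic name "my-topic" are hypothetical placeholders):

```java
import java.util.Collections;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AbandonBatchOnRebalance {

    // Hypothetical handler standing in for the real per-record processing.
    static void process(ConsumerRecord<String, String> record) {
        // ...
    }

    static void run(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            try {
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                }
                consumer.commitSync();
            } catch (CommitFailedException e) {
                // A rebalance completed while this batch was being processed.
                // Abandon the batch and return to poll(); records that were
                // not committed will be redelivered after the reassignment.
            }
        }
    }
}
```

CommitFailedException here is the signal that the group has already moved on, so retrying the commit would be pointless; dropping back to poll() lets the consumer rejoin cleanly.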
> -Original Message-
which is not yet categorized.
Regards,
Vinay
On Wed, Apr 27, 2016 at 7:52 AM, vinay sharma <vinsharma.t...@gmail.com>
wrote:
> Hi Phil,
>
> This sounds great. Thanks for trying these settings. This means there is
> probably something wrong in my code or setup. I will check what is causing this
found the issue. The Ops team deployed Kafka 0.8.1 and all
> >> my code was 0.9.0. Simple mistake and one that I should have thought of
> >> sooner. Once I had them bump up to the latest Kafka all was well.
> >> Thank you for your help!
> >>>
Hi Phil,
This sounds great. Thanks for trying these settings. This means there is
probably something wrong in my code or setup. I will check what is causing this
issue in my case.
I have a 3-broker, 1-ZooKeeper cluster and my topic has 3 partitions with
replication factor 3.
Regards,
Vinay Sharma
Hi Phil,
Config ConsumerConfig.METADATA_MAX_AGE_CONFIG has a default of 300000 ms (5
minutes). This config drives a mechanism where a proactive metadata refresh
request is issued by the consumer periodically. I have seen that I get a log
about a successful heartbeat along with the commit only before this request. Once this
process.
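As a config fragment, the refresh interval in question can be set explicitly like this (a sketch; `metadata.max.age.ms` is the standard property behind `ConsumerConfig.METADATA_MAX_AGE_CONFIG`, and the broker address and group id are assumed placeholders):

```java
import java.util.Properties;

public class ConsumerMetadataConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "my-group");                // hypothetical group id
        // Proactive metadata refresh interval; the default is 300000 ms
        // (5 minutes). Lowering it makes refresh requests more frequent.
        props.put("metadata.max.age.ms", "300000");
        return props;
    }
}
```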
>
> I guess a consumer rebalance will also trigger a metadata refresh but what
> else might?
>
> Thanks
> Phil Luckhurst
>
> -Original Message-
> From: vinay sharma [mailto:vinsharma.t...@gmail.com]
> Sent: 26 April 2016 13:24
> To: users@kafka.apache.org
of defect but
it seems something is fixed related to the time reset of the heartbeat task so
that the next heartbeat request time is calculated correctly. From the next
version, commitSync will act as a heartbeat, as per the defect.
Regards,
Vinay Sharma
On Apr 26, 2016 4:53 AM, "Phil Luckhurst" <phil.luckhu..
committing on regular intervals (which sends a heartbeat), this somehow does
not save the consumer from getting a timeout during a metadata refresh. This
issue does not happen if I am committing after each record, that is, every 2-4
seconds, or if a commit happens right after the metadata refresh response.
Regards,
Vinay Sharma
broker 1 ZooKeeper Kafka setup. I ran the test for more than
a minute and saw it just once for both producers before their 1st send.
Regards,
Vinay Sharma
On Apr 22, 2016 3:15 PM, "Fumo, Vincent" <vincent_f...@cable.comcast.com>
wrote:
> Hi. I've not set that value. My producer proper
Generally a proactive metadata refresh request is sent by the producer and
consumer every 5 minutes, but this interval can be overridden with the property
"metadata.max.age.ms", which has a default value of 300000 ms, i.e. 5 minutes.
Have you set this property very low in your producer?
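A quick way to check for such an override is to inspect the producer's configured properties before building the producer. This is a sketch with an arbitrary "suspiciously low" threshold of one minute, not an official check:

```java
import java.util.Properties;

public class MetadataAgeCheck {
    // Returns true if metadata.max.age.ms has been overridden to something
    // suspiciously low (here, under one minute; the threshold is arbitrary).
    // When the property is absent, the documented default of 300000 ms applies.
    public static boolean isSuspiciouslyLow(Properties producerProps) {
        String val = producerProps.getProperty("metadata.max.age.ms", "300000");
        return Long.parseLong(val) < 60_000L;
    }
}
```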
On Fri, Apr 22, 2016
by another consumer.
Regards,
Vinay Sharma
On Thu, Apr 21, 2016 at 2:09 PM, Phil Luckhurst <phil.luckhu...@encycle.com>
wrote:
> Thanks for all the responses. Unfortunately it seems that currently there
> is no foolproof solution to this. It's not a problem with the stor
never be processed if the consumer crashes
while processing records which are already marked committed due to a
rebalance.
Regards,
Vinay Sharma
Regarding the pause and resume approach, I think there will still be a chance
that you end up processing duplicate records. A rebalance can still get
triggered for numerous reasons while you are processing records.
On Thu, Apr 21, 2016 at 10:34 AM, vinay sharma <vinsharma.t...@gmail.com>
wrote
I was also struggling with this problem. I have found one way to do it
without making consumers aware of each other's processing or assignment
state. You can set autocommit to true. Irrespective of the autocommit interval,
setting autocommit to true will make Kafka commit all records already sent to
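The autocommit setup described above might look like this as a config fragment (`enable.auto.commit` and `auto.commit.interval.ms` are the standard consumer property names; broker address and group id are assumed placeholders):

```java
import java.util.Properties;

public class AutoCommitConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "my-group");                // hypothetical group id
        // With autocommit on, offsets of records already delivered to the
        // application are committed periodically and before a rebalance
        // completes, regardless of the interval below.
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "5000"); // default interval
        return props;
    }
}
```

Note the trade-off raised elsewhere in this thread: autocommit can mark records committed that were delivered but not yet processed, so a crash mid-batch can skip records.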
Hi Everyone,
I see that on each metadata refresh a rebalance is triggered and any
consumer in the middle of processing starts throwing errors like
"UNKNOWN_MEMBER_ID" on commit. There is no change in partitions or
leadership of partitions or brokers. Any idea what could cause this
behavior?
What
"REBALANCE_IN_PROGRESS"?
What is the ideal way to deal with this?
Any pointers will be much appreciated.
Regards,
Vinay Sharma